by Judith Curry
Science has been extraordinarily successful at taking the measure of the world, but paradoxically the world finds it extraordinarily difficult to take the measure of science — or any type of scholarship for that matter. – Stephen Curry
The problem
The Higher Education Funding Council for England is reviewing the idea of using metrics (such as citation counts) in research assessment.
At occamstypewriter, Stephen Curry writes:
The REF has convulsed the whole university sector — driving the transfer market in star researchers who might score extra performance points and the hiring of additional administrative staff to manage the process — because the judgements it delivers will have a huge effect on funding allocations by HEFCE for at least the next 5 years.
This issue of metrics has a stark realization at King's College London, where they are firing 120 scientists. The main criterion for the firings appears to be the amount of grant funding.
Using metrics to assess academic researchers is hardly something new. In my experience with university promotion and tenure, the number of publications, the number of citations (and H-index), and the research funding dollars all receive heavy consideration. It is my impression that the more prestigious institutions pay less attention to such metrics, and rely more on peer review (both internal and external). In my experiences on the AMS Awards Committee and the AGU Fellows Selection committee, the number of publications and H-index are considered prominently.
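For readers unfamiliar with the H-index mentioned above: a researcher has an H-index of h if h of their papers have at least h citations each. A minimal sketch of the computation, with invented citation counts for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations."""
    # Sort citation counts in descending order, then find the last
    # (1-based) rank at which the count is still at least the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with six papers:
print(h_index([25, 8, 5, 4, 3, 0]))  # → 4
```

Note that a single highly cited paper raises the H-index only to 1, which is part of why the metric rewards steady output over isolated breakthroughs.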
What are the responses of scientists to this? Well, most just play the game in a way that ensures they maintain job security. There are a few interesting perspectives on all this that have emerged in recent weeks:
The most thought-provoking essay is from The Disorder of Things; excerpts:
Whilst metrics may capture some partial dimensions of research ‘impact’, they cannot be used as any kind of proxy for measuring research ‘quality’. We suggest that it is imperative to disaggregate ‘research quality’ from ‘research impact’ – not only do they not belong together logically, but running them together itself creates fundamental problems which change the purposes of academic research.
Why do academics cite each others’ work? This is a core question to answer if we want to know what citation count metrics actually tell us, and what they can be used for. Possible answers to this question include:
- It exists in the field or sub-field we are writing about
- It is already well-known/notorious in our field or sub-field so is a useful reader shorthand
- It came up in the journal we are trying to publish in, so we can link our work to it
- It says something we agree with/that was correct
- It says something we disagree with/that was incorrect
- It says something outrageous or provocative
- It offered a specifically useful case or insight
- It offered a really unhelpful/misleading case or insight
[Citations] cannot properly differentiate between ‘positive’ impact or ‘negative’ impact within a field or sub-discipline – i.e. work that ‘advances’ a debate, or work that makes it more simplistic and polarised. Indeed, the overall pressure it creates is simply to get cited at all costs. This might well lead to work becoming more provocative and outrageous for the sake of citation, rather than making more disciplined and rigorous contributions to knowledge.
On ‘originality’ – work may be cited because it is original, but it may also be cited because it is a more famous academic making the same point. Textbooks and edited collections are widely cited because they are accessible – not because they are original. Moreover, highly original work may not be cited at all because it has been published in a lower-profile venue, or because it radically differs from the intellectual trajectories of its sub-field. There is absolutely no logical or necessary connection between originality and being cited.
Using citation counts will systematically under-count the ‘significance’ of work directed at more specialised sub-fields or technical debates, or that adopts more dissident positions. [If] we understand ‘significance’ as ‘the development of the intellectual agenda of the field’, then citation counts are not an appropriate proxy.
To the extent that more ‘rigorous’ pieces may be more theoretically and methodologically sophisticated – and thus less accessible to ‘lay’ academic and non-academic audiences, there are reasons to believe that the rigour of a piece might well be inversely related to its citation count.
An article in Times Higher Education reports:
Academics’ desire to be judged on the basis of their publication in high-impact journals indicates their lack of faith in peer review panels’ ability to distinguish genuine scientific excellence, a report suggests.
Specifically with regards to using research funding as a metric:
Philip Moriarty has a post How Universities Incentivise Academics to Short-Change the Public. Excerpts:
What’s particularly galling, however, is that the annual grant income metric is not normalised to any measure of productivity or quality. So it says nothing about value for money. Time and time again we’re told by the Coalition that in these times of economic austerity, the public sector will have to “do more with less”. That we must maximise efficiency. And yet academics are driven by university management to maximise the amount of funding they can secure from the public pot.
Cost effectiveness doesn’t enter the equation. Literally.
Consider this. A lecturer recently appointed to a UK physics department, Dr. Frugal, secures a modest grant from the Engineering and Physical Sciences Research Council for, say, £200k. She works hard for three years with a sole PhD student and publishes two outstanding papers that revolutionise her field.
Her colleague down the corridor, Prof. Cash, secures a grant for £4M and publishes two solid, but rather less outstanding, papers.
Who is the more cost-effective? Which research project represents better value for money for the taxpayer?
…and which academic will be under greater pressure from management to secure more research income from the public purse?
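The Frugal/Cash comparison can be made concrete with back-of-the-envelope arithmetic (figures taken from the example above; the point is that grant income alone says nothing about cost per output):

```python
# Cost per paper for the two hypothetical academics in Moriarty's example.
frugal_cost_per_paper = 200_000 / 2    # Dr. Frugal: £200k grant, 2 papers
cash_cost_per_paper = 4_000_000 / 2    # Prof. Cash: £4M grant, 2 papers

print(frugal_cost_per_paper)  # 100000.0 — £100k per (outstanding) paper
print(cash_cost_per_paper)    # 2000000.0 — £2M per (solid) paper
```

By the grant-income metric, Prof. Cash looks twenty times more successful; by cost per paper, Dr. Frugal is twenty times more cost-effective.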
And finally, a letter to the editor of PNAS entitled Systemic addiction to research funding.
Trending
Daniel McCabe has an essay on The Slow Science Movement. Excerpts:
Today’s research environment pushes for the quick fix, but successful science needs time to think.
There is a growing school of thought emerging out of Europe that urges university-based scientists to take careful stock of their lives – and to try to slow things down in their work.
According to the proponents of the budding “slow science” movement, the increasingly frenetic pace of academic life is threatening the quality of the science that researchers produce. As harried scientists struggle to churn out enough papers to impress funding agencies, and as they spend more and more of their time filling out forms and chasing after increasingly elusive grant money, they aren’t spending nearly enough time mulling over the big scientific questions that remain to be solved in their fields.
Among those who have sounded the alarm is University of Nice anthropologist Joël Candau. “Fast science, like fast food, favours quantity over quality,” he wrote in an appeal he sent off to several colleagues in 2010. “Because the appraisers and other experts are always in a hurry too, our CVs are often solely evaluated by their length: how many publications, how many presentations, how many projects?”
From Dylan's Desk: Watch this multi-billion-dollar industry evaporate overnight. Excerpts:
Imagine an industry where a few companies make billions of dollars by exerting strict control over valuable information — while paying the people who produce that information nothing at all. That’s the state of academic, scientific publishing today. And it’s about to be blown wide open by much more open, Internet-based publishers.
Indeed, Academia.edu, PLOS, and Arxiv.org are doing something remarkable: They’re mounting a full-frontal assault on a multi-billion-dollar industry and replacing it with something that makes much, much less money. They’re far more efficient and fairer, and they vastly increase the openness and availability of research information. I believe this will be nothing but good for the human race in the long run. But I’m sure the executives of Elsevier, Springer, and others are weeping into their lattes as they watch this industry evaporate. Maybe they can get together with newspaper executives to commiserate.
Dorothy Bishop has a post Blogging as post publication peer review: reasonable or unfair? Excerpts:
Finally, a comment on whether it is fair to comment on a research article in a blog, rather than going through the usual procedure of submitting an article to a journal and having it peer-reviewed prior to publication. The authors’ reactions: “The items you are presenting do not represent the proper way to engage in a scientific discourse”.
I could not disagree more. [W]hat has come to be known as ‘post-publication peer review’ via the blogosphere can allow for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing. In addition, it brings the debate to the attention of a much wider readership. I don’t enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative.
From occamstypewriter on altmetrics:
One thing that has changed of course is the rise of alternative metrics — or altmetrics — which are typically based on the interest generated by publications on various forms of social media, including Twitter, blogs and reference management sites such as Mendeley. They have the advantage of focusing minds at the level of the individual article, which avoids the well known problems of judging research quality on the basis of journal-level metrics such as the impact factor.
Social media may be useful for capturing the buzz around particular papers and thus something of their reach beyond the research community. There is potential value in being able to measure and exploit these signals, not least to help researchers discover papers that they might not otherwise come across — to provide more efficient filters as the authors of the altmetrics manifesto would have it. But it would be quite a leap from where we are now to feed these alternative measures of interest or usage into the process of research evaluation. Part of the difficulty lies in the fact that most of the value of the research literature is still extracted within the confines of the research community. That may be slowly changing with the rise of open access, which is undoubtedly a positive move that needs to be closely monitored, but at the same time — and it hurts me to say it — we should not get over-excited by tweets and blogs.
JC reflections
Research universities in the 21st century are in a transition period, as the fundamental value proposition of the research university is being questioned in the face of funding pressures. It's time to start re-imagining the 21st century research university. More on this topic will be forthcoming.
My main reflection on metrics is that you get what you count. If you count numbers, then numbers are what you will get. If you want originality, significance, robustness, then counting citations, dollars, and numbers of publications won’t help. If you want impact beyond the ivory tower, such as research that stimulates or supports industry or informs policy making, then counting won’t help either.
In looking back at my own history of funding, publication productivity and citations, here is what I see. My time at the University of Colorado (mid-1990s to 2002) stands out as the period where I brought in large research budgets ($1M+ per year) and cranked out a large number of papers, only a few of which I regard as important. I was definitely in 'no time to think' mode, spending my time writing grant proposals and editing graduate student manuscripts. With regards to citations, my papers with the largest number of citations are the 2005 hurricane paper and a review article on Arctic clouds. The papers that I truly regard as scientifically significant have relatively few citations, although the citations on these fundamental papers keep trickling in.
My own rather extended postdoc period (4 years) allowed me lots of time to think; I despair for the current generation of young scientists who are under enormous pressure to crank out the publications and to start bringing in research funds so they can be competitive for a faculty position.
I suspect that the dynamics of all this will change, largely fueled by the internet. So does anyone wonder why academic climate researchers crank out lots of papers, try to get them published in Nature, Science, or PNAS, and don't worry too much whether their paper will stand the test of time? Scientists are following their reward structure – from their employers and from professional societies that dish out awards.
Your name lends itself to a unit of measurement, like watts and kelvin.
I propose the “curry”, a unit for measuring the temperature of a debate. The climate change debate generally tends to be in the megacurry range, or about 1 vindaloo.
Thank you, Professor Curry, for your efforts.
I am convinced, however, that integrity cannot be restored to science unless we first accept that Aston gave valid reasons to fear Earth might be accidentally converted into a star in August 1945.
http://veksler.jinr.ru/becquerel/text/books/aston-lecture.pdf
If we forgive those who deceived us for the past sixty-nine years (2014 – 1945 = 69 yrs), then government science may again be a tool to benefit society as a whole rather than an instrument of our political leaders.
You only need to read the last paragraph of Aston’s 1922 Nobel Prize Lecture to know that world leaders had legitimate concerns in late August 1945 when they realized they had lost control of information on how to build atomic bombs.
JoNova and David Evans are in the process of discovering Earth’s climate is driven by the Sun’s deep-seated magnetic fields (and the X Force) from the Sun’s compact innards (Fe-mantle and/or pulsar core).
http://joannenova.com.au/2014/06/big-news-part-iv-a-huge-leap-understanding-the-mysterious-11-year-solar-delay/
That was also the conclusion of this paper by Professors Barry Ninham, Stig Friberg and me: "Super-fluidity in the solar interior: Implications for solar eruptions and climate," Journal of Fusion Energy 21, 193-198 (2002). http://www.springerlink.com/content/r2352635vv166363/ http://www.omatumr.com/abstracts2003/jfe-superfluidity.pdf
“What is the measure of scientific ‘success’?”
Easy,
1) Getting other scientists to consider, discuss, rebut or build on your work.
2) Succeeding in predicting phenomena from general principles derived from basic physical laws.
3) Getting supporters of the paradigm du jour to admit you have successfully predicted phenomena from general principles derived from basic physical laws.
No, it involves the ability to get editors fired from journals, the ability to get your smears of other scientists into the press, and the aggregation of a large number of blindly loyal followers who don’t understand a thing you do, but defend it relentlessly.
Comparison of theory to data has nothing to do with it.
Unfortunately the scientific community, in the majority of disciplines, has gradually evolved, to some degree, from what Tallbloke has written to what TJA describes.
It is due in part to the nature of politics and funding.
It is a manifestation of what Ike warned about.
Talking with a team of medical doctors in different areas of expertise in a casual setting today, a general conclusion upon which they all seemed to agree is that the general finding of a group will be that of the loudest person in the room. :(
TJA: You must have read ‘Against Method’ by Paul Feyerabend. ;-)
Kicking, biting, scratching – Anything goes.
Only very occasionally is an anti-paradigm theory strong enough to overcome the inertia of vested institutional interest.
I believe I'm getting closer. My co-researcher and I have been progressing our solar-planetary theory steadily in the background. Along with the latest paper from McCracken et al and Rick Salvador's model, we have a compelling explanation for the C14 and Be10 record which the hockey jockeys don't come within 9900 years of matching.
Scientific ‘success’ lies in discovering not inventing nature’s rules.
Dr Curry: If you had an unlimited budget to do climate research, with no constraints, how would that money be spent? Would the research money mostly go toward hiring people, or mostly for plant and equipment?
Good question. I would make sure the observations are funded (satellite, monitoring, and important process experiments). I would then spend funding on basic research related to climate dynamics (none of impacts stuff or endless analysis of model output) including solar physics. We need to entrain physicists, mathematicians, engineers, chemists and computer scientists for an infusion of new approaches to understanding the climate system.
Thank you Dr Curry. Maybe an independently funded project is needed, with the goals you just outlined. Need to find a few billionaires interested in more knowledge and a better world.
I hereby nominate Dr.Curry for climate czar. If we have to have socialized science, we might as well have someone lead it who has a clue about how it should be done.
—–Add to the list Statisticians!
If you feel it’s possible, would you explain why you feel this sort of thing isn’t currently being funded? As in: internal vs external, science vs politics etc. Thanks.
"We need to entrain physicists, mathematicians, engineers, chemists and computer scientists for an infusion of new scientific approaches to understanding the climate system."
Fixed that for ya!
Observations are the cornerstones of science, yet the funding and expertise to operate a credible climate data network is constantly under budget and personnel stress. How many papers would be published without data?
Many of these networks were designed many decades ago, based on an agency’s mission, not climate change detection. You really need to understand the data before you use it. Without painstaking review of the station metadata and data values, you run great risks with your analysis.
RLS asked a very good question, Dr. Curry’s answer is just the first step in a direction that some of the great scientists have followed to build our civilization.
Judith, This is exactly right.
I'd also suggest that something be done about data quality, data definitions and semantics.
I’m interested in seeing how people find fault with this recipe.
I’m sure some will.
Getting a robust set of observations with genuine global coverage is the most crucial step, and one that still needs to be taken. We should still be at the stage of building and calibrating the wind tunnel (so to speak), but too many people give in to the temptation of trying to use it before it is ready.
rls
See Bjorn Lomborg and the Copenhagen Consensus for similar perspectives on how best to spend the budget for climate and energy.
One measure of success would be to cut university administration by 90%.
See: The Clever Stunt Four Professors Just Pulled to Expose the Outrageous Pay Gap in Academia
4 Profs to replace 1 president.
Consensus Climate Science is Dead in the Water.
A new, diverse team, with no Consensus Constraints will come to understand Climate Natural Variability.
You grade student papers as a measure of their success, right? So what you need is to get your papers graded by those who know more than you do about the subject. Of course that means engineers must start grading the papers produced by scientists. I’ve been doing that since I retired from Dell 15 years ago as service to mankind. ;-)
Was the "…service to mankind" your retirement?
Is this a joke, the joke-intro?
Which of the four is an actual sentence?
— John Moore
You have to sleep sometime, Curry.
I had my dad check my homework and papers through college. When some of my calculus classmates questioned that, I explained he was a Chemical Engineer and understood the subject not only better than me, but the TA.
Must say though, it was tough when he marked problems I got wrong. He always used a pen, which required me to recopy the entire work, even if there was a single error.
Dr. Ioannidis ["Why Most Published Research Findings Are False," in PLOS Medicine] found that the more popular an idea becomes, and the more researchers the idea attracts, the worse the resulting science will be. When you compare the assumptions used by Ioannidis to what we see in climate science, the reliability of global warming research can be expected to be far worse, and so it is. The bias of Western AGW researchers isn't a tendency, it's a given, so climate researchers will come up with wrong findings all of the time, not just most of the time. And, among all possible motivations, climatists are actually being paid out of the limitless purse of the government, and by academia's promise of lifetime tenure, to make evidence and models dance to any tune they wish to play; accordingly, the climatists will always succeed in "proving wrong theories right," whatever it takes.
Patents count as well.
It took me more than 25 years to build up the knowledge to get to the first stage of drug design; I don’t think the younger generation will be allowed to incubate for this long.
Another tip: after the new Dean has laid out his strategic vision for the institute and asks "Any questions?", do not ask "Are you insane?" Although everyone in the room wanted to know the answer, this is not a good career move.
Three prescient articles and lectures:
• Alberts et al: “Rescuing US Biomedical Research from its Systemic Flaws“
• Resnick: “Systemic Addiction to Research Funding“
• Oreskes: “Scientific Consensus and the Role and Character of Scientific Dissent“
All agree the system is breaking down, all agree that changes are coming, none can foresee what these changes can be/should be/will be.
Bottom Line “Deans and Chairs can count, but they can’t read!”
I have been working on these issues for several years, and the calls for revolution are both aimless and pointless (because they are aimless). The Internet has already changed science, but not by making the fundamental structures go away. The next big change will be when the US Public Access program finally hits, but that too will not change the fundamentals. Happily, science is safe for the foreseeable future.
For those who preach revolution bear in mind that we are talking about the activities of millions of people around the world, who publish a million articles a year. Fads are just that, no matter how loud.
You have been working on this for years.
What's the measure of your success?
First I was invited to blog for the Society for Scholarly Publishing.
http://scholarlykitchen.sspnet.org/author/dwojick/
Then I started a subscription newsletter.
http://insidepublicaccess.com/issues.html
Along the way I have been funded by several groups.
Making a living is a first order measure of success.
So, total failure in terms of measurable changes.
Why so harsh Mosher?
harsh?
Water is wet is not harsh.
david claims to have worked on this problem.
there is no measurable success.
That says nothing about david or the quality of his work.
many problems are worked on with no measurable success.
Gosh, decades of trying to reduce uncertainty on sensitivity.
is noting that harsh? nope. water is wet.
Not sure what you mean by failure, Mosher. No one is measuring the changes so a lack of measured (not measureable) changes is no failure on my part. Not measuring something does not mean it is not changing. No one is measuring the growth of the trees in the forest yet they grow anyway. They do not fail to grow, they just fail to be measured.
Einstein predicting gravitational bending of light and having it proven by measurement = scientific success.
50+ global warming models being proven 97% wildly incorrect on one side by actual temps = failure.
The fact that anyone who said the models were greatly overestimating future warming was called a “denier” = irony.
When stepping outside “what they do” scientists are very often hampered by the very things that can make them terrific at what they do. A great scientist can be far too literal, mechanistic and even simplistic when faced with complexities and unknowns. Some actually believe that whatever they can’t get their head around must be “chaos”.
When a science is incomplete to the point of almost not existing that is when we see scientists at their worst: they operate under the disastrous assumption that best available knowledge is adequate knowledge.
The sane approach is to let people do what they do best with patience and freedom. Accept the pottiness that goes along with any extraordinary mental or intellectual facility – and end this monstrosity, this barbarity, called Publish-or-Perish.
Patience and freedom cannot make promotion decisions. Not everyone can become a full professor, nor an associate professor, etc. The present system is about decision making. If you do not have an alternative way of making these decisions, then you have nothing to offer. Nor is wishing decisions away helpful.
A lack of promotion decisions? Don’t tell me civilisation is now facing Peak Promotion Decisions!
Sorry but I do not understand your reply. I said nothing about a lack of promotion decisions, or funding decisions, which are the other big case.
That’s okay, David. I wasn’t interested in lack of promotion decisions either. But if you’d like to check out any field of study with the word “climate” or “environment” in its title (we’ll leave “gender” out of it for now) you’ll see where money should NOT go but DOES go, in prolific quantities. And if we don’t stop these people from their dogma-based publishing then the industrialised West will be a-perishing.
Here’s one good rule of thumb for academia: Make Nothing Sexy.
John Gall, in _Systemantics_, covers this in a chapter entitled “Administrative Encirclement.”
How administrators take over productive science.
If you don’t read this, you won’t get anywhere in pondering incentives.
The incentive science needs is curiosity, and has nothing to do with administrators and metrics, who do their best to stamp it out in the process of taking over.
I don't see it in Google. It's about a professor interested in angiospores and the need to write out his objectives for the next year.
Science is an industry that gets hundreds of billions of dollars a year. This requires administration.
In the UK and US, the university administrators have seized power from the academics; if you doubt this, try this test:
Which department/block has the best carpets/office furniture and the most modern phones/computers?
First of all, the top administrators (President, Provost, etc.) have to be the bosses. Second, in all the cases I know of the faculty have a lot of power. Universities are large corporations.
In the 6 universities that I’ve been affiliated with, I have seen substantially different ‘power structures’, with some universities being very top down, and others where faculty have a lot of power. My current institution (Georgia Tech) was of the top down variety; University of Colorado was more bottom up.
The place of the university is not to generate research funds, any more than it is to field a top flight football or basketball team. The place of a university is, or should be, to teach.
To the extent that research can be conducted to facilitate that process it should be welcomed. But the whole “scientific research” industry is a product of government taking ever more control of what passes for education.
The US federal government took over the student loan industry, thus driving up the cost of an education. I worked my way through undergrad at a fairly decent university. There is no way my son or daughter could do the same.
The progressives who run the government have also found that “science” can be used to give them great propaganda to help them maintain and increase their power. From climate, to health, to the environment, to technology, the government decides who gets funded, and therefore who gets hired, retained, tenured….
Give them 5 more years, and the government is going to start dictating university-level curricula, just like they are trying to do at the elementary and secondary level already.
When you feed at the government trough, the government sets the menu.
The book "Coming Apart" shows that the influential people in academia, government, and media are often close, in mind and in social context – Crony Grantism.
There is an entire body of research out there on how to fix science. A fundamental first step is enough baseline funding for every scientist to do some science without having to compete for a grant. Who else has a job where you need to get outside grants to do what your salary pays you to do? Imagine if Walmart told every newly hired cashier, "Now go out and get a grant for the cash register you will need." Or imagine a newly hired pilot for American Airlines being told, "Great, you can start flying as soon as you get a grant to buy yourself an airplane." Yet that is how scientists are supposed to do it.

But how to pay for universal baseline funding for everyone who is on a salary? It is so easy! Example: "Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant," Accountability in Research: Policies and Quality Assurance, Volume 16, Issue 1, 2009. Dismantle the bloated administrative oversight and use it for baseline funding and departmental-level support for all the researchers.

My alma mater recently created a "Dean of Professionalism" whose job includes creating and overseeing a dress code for scientists. I kid you not! Let's start by taking the salary for such deans and putting it into baseline funding. It's not like the solutions aren't out there, well researched, cited, and published. Science as a system simply ignores the truth. And I certainly don't expect my little rant to change anything.
if you want to FIX science, get rid of Consensus.
Real Science is always Skeptical.
I don’t think science is in decline, much, but I do think the institutions representing (or claiming to represent) science have gone through a bad patch that has lasted a couple of decades.
Maybe the internet will fix the publishing problems. I would rather see professional networking fix the scientist issue. Which it could.
Most importantly, I am very hopeful that MOOCs can fix the problems with universities. They should give academics another metric that they can use instead of citation counts and publications. Sadly, it will then bring in 'star power' as a metric to be used against them.
“How many hits does your lecture have?”
When I started as a scientist, every department provided its researchers with a telephone, postage, electricity, furniture, and a shared administrative staff who did typing and such. The department had two full-time technicians and some heavy-duty equipment (including a transmission electron microscope) that was shared by all. When I left, the newly hired professor was shown an empty wood-framed space in a new building. Their first assignment was to get a grant to put in drywall and doors, hook up the lights and the plumbing, and buy furniture, and they had to pay 15% of each grant to the university as "overhead" to cover the cost of janitorial service.
“What is the measure of scientific ‘success’?”
Science has become industry. Follow the money and consult an economist.
When I started in science, a typical grant application was two pages and the department head signed off to show you actually worked in a university. When I left, the letter of intent took 15 pages and 12% were invited to apply for the grant. The grant itself was nearly 100 pages and took 6-8 weeks to write if you were lucky enough to get through the first hurdle, and it had to be reviewed by staff from two different offices of deans of research before being finally sent off.
Are there professional grant proposal writing services?
Are there paid lobbyists for use with (scientific) funding agencies?
Of course there are! After I was shown the door one of the deans where I was exited invited me to come back and join his group of professional grant writers. They charge a lot.
“… join his group of professional grant writers. They charge a lot.”
Now there is a great cottage industry.
Think of the cost of what gets called “climate science” and the trillions that turn on its theories.
Now, every theory about climate that I’ve ever heard involves the deep hydrosphere, and in a big way. So the deep hydrosphere has a traffic problem? You wish.
Mawson went to the Antarctic to find out stuff. (That’s the place where the melty bits are also the geologically active bits – but we’re only supposed to talk about the melty, ’cause it’s on top and easy to see without going anywhere.) What happens in Antarctic exploration now? A bungling zealot with a manic laugh and flair for selling biochar goes there to affirm a dogma – and can’t get past the ice. Peer review that, suckers.
Nope, curiosity and empiricism are on the fade. Better watch out.
Why do we do what we do? Why do some of us risk lives defending the nation? Why do we work behind security barriers which preclude publication? Good questions, but many do.
Those of us scientists who have been through a war and lost good friends have easy answers to these questions, but count ourselves lucky to have survived. Obviously there are more important things in life than publication.
We do try to separate our professional lives from our private lives with various degrees of success. But to be a good scientist, you have to do it 24/7; publication is just a privilege you might occasionally enjoy. This is a competitive world in which rewards come rarely to most of us. Sorry I can’t provide better answers to the above.
Maybe universities could cooperatively review papers and evaluate them using better scientific criteria than cites? It’s a tough problem due to the momentum that develops in a field (see climate science) and makes it difficult for mavericks who might swim against the current but in fact be right. But cites don’t mean much, AFAICT.
Yes, let’s appoint a dean of cooperative reviewing! Can’t be any worse than a dean of dress code.
I agree with FTTW. Universities already pre-review grant applications to make sure they are on-target for priority areas, and suppress anyone with a new idea. A better idea would be to let academics come up with the ideas that need to be explored and send the administrators to the unemployment line.
At least in the social sciences there is an imbalance in the intensity of incentives to write and the incentives to read. The attention economy constraint that we all face applies with a vengeance to academia. People are desperate for rationales to not read things, even in their own fields, and trying to get promotion, publication, and granting decisions based on thoughtful assessments of articles is almost impossible. Prof. Curry reflects this reality with her observation that her most important papers are the least read and cited. One can flip this on its head and note that if you track down citations of your own work, the overwhelming majority are incidental and have almost no engagement with the thrust of what you were doing.
My assessment is that we need to rebalance the incentive structure to reward more people to continuously synthesize the state of research in a given field, and perhaps also translate results for those in other fields. There can be a strong symbiosis between the prolific publishers and the omnivorous synthesizers–one hallway conversation between the two can catalyze new insights or eliminate huge amounts of duplicative effort.
Thank you for this:
My assessment is that we need to rebalance the incentive structure to reward more people to continuously synthesize the state of research in a given field, and perhaps also translate results for those in other fields. There can be a strong symbiosis between the prolific publishers and the omnivorous synthesizers–one hallway conversation between the two can catalyze new insights or eliminate huge amounts of duplicative effort.
The topic of a forthcoming post: The Art of Integration
I hope the “Art of Integration” post will be relevant to and focused on solving the real world problem – i.e. delivering the science that is relevant for policy analysis.
How about a discussion in the comments section of a blog? Ignoring all the surrounding noise?
an interesting comparison/contrast with one prominent case in the humanities (I’m not sure how common this is, but I am aware of a number of people in Philosophy who attain tenure based upon only a few published articles, where some departments emphasize quality over quantity):
the eminent American philosopher John Rawls received tenure at Cornell and then (soon after) MIT at a time when he had published ‘only’ 3 journal articles and 3 book reviews:
http://adrianblau.files.wordpress.com/2013/06/rawls-publications.pdf
One may argue that this is merely an extraordinary case for someone judged early on as bearing great future promise, but it may also be a suggestion for many fields that quality should matter much more than quantity.
Rawls did not publish his seminal “A Theory of Justice” until he was 50 years old, although some of his articles preceding it were regarded as highly influential.
Still, publishing ‘only’ 1-2 articles per year (fewer in the early years), his university career would have been derailed early on according to number-crunching counts of papers. Instead he came to be regarded as one of the most eminent philosophers of the 20th century (regardless of whether one agrees with many of his arguments or not). Interesting case, I think….
Can we give big bonus points to the guys that highlight Corrigenda? I always thought a Corrigendum should be about 25 negative brownie points, but I hardly see anyone referencing them.
Actually, measuring success in chemistry at pharmaceutical companies is a huge problem. Medicinal chemists formerly at Astra-Zeneca described a dysfunctional incentive system that favored forwarding large numbers of barely tweaked “back-up” compounds (derived from high-throughput screening) that usually turned out to fail whenever the lead compound failed (so providing little actual “back-up”). Since almost all drug candidates fail, distinguishing good and bad performance in the drug-development system is almost as hard as judging the contributions of pure researchers. Good work and bad work both usually lead to failure.
This kind of bs about how to measure success only exists in soft sciences.
Can you imagine chemists arguing about how to measure success?
+1
+1
Three Mikes in a row!
You can often see them punching the air in a departmental mass-spectrometry lab. I was gobsmacked by a guy who could recognise the tin isotope patterns at 10 yards.
3rd Mike – try saying the names of the first two out loud…
Anathema to the Scientific Method, Popper / Einstein,
One goddam counter example knocks out yr theory
as a contender.
H/t Marlon Brando.
Piffle, bts. If twenty of my serfs agree I am a fine master, I am surely justified in beating the one who says I’m not. I see it not as punishing an insolent lout but rather as rewarding the loyalty of my twenty consensus serfs.
There’s somethin’ wrong with that argument, mosomoso, but
serfs are uncertain as ter what.
“Can you imagine chemists arguing about how to measure success”
Yes.
Imagination should be your middle name, Mosher. Is it?
Steve, Your observations about synthesizing existing results is very good one. There is tremendous value in a clear and careful exposition of well known results. It is also an excellent teaching vehicle.
Steve, as an economic policy adviser covering a broad range rather than a narrow speciality, and having knocked about the world a bit and done a lot of things other than economics, I found that I had an ability to connect disparate fields and information and synthesize it, which to me was very valuable and meant that I provided much better advice. It’s hard to see connections if you’re not exposed to things outside your narrow field. So I support your suggestion.
My personal opinion is that science as practiced in universities is about to implode on its own administrative money centred bloat. What is being described at Kings is going on all over North America as the money pot gets smaller and smaller and the people at the top of the pyramid get bigger and bigger salaries but are left to “eat their young” just to stay alive. I predict real science will become the stuff of blogs like this until the implosion is over.
>My personal opinion is that science as practiced in universities is about to implode on its own administrative money centred bloat.
You mean like college athletic departments? Have you compared head coach pay packages to what they pay in science departments?
>I predict real science will become the stuff of blogs like this until the implosion is over.
Just remember you are turning over stewardship of science to a virtual domain, the internet. It’s 99.9% digital, there is no guarantee any of this stuff would survive a Carrington event. Might want to have a backup plan.
Jack Smith
I would suggest you look at Science 2.0 as a possible model on which to build the idea of blog-based publishing. The comments mean there is a lot of garbage to wade through, but the good stuff does shine out eventually. As for a Carrington event, that can be prepared for and data can be protected.
‘What you count is what you get.’
‘Well, we serfs always understood that doing science involved
measurement but this is ridiculous or monstrous or er, maybe
grotesque?’
Supportable facts about this world we all live in. I hope.
-1
Obviously you have to define what science is first. Before you can attempt to measure it.
Andrew
science is already defined. with a measure, maybe we can figure out what definition they are using.
You know it cousin.
Excellent point. As that was what I was thinking in reading the article. I guess that means I fully endorse Dr. Curry’s reflection.
In the US it is now all about getting funding and publishing what the administration in power wants.
There needs to be a non-public climate research project, with the goals outlined by Dr Curry in her comment to me near the top of this post. We only need a few wealthy individuals to kick this off.
As far as I can tell, paywall journals contribute less than nothing to the process of disseminating research information. They provide negative ‘value added’, and so ought not exist. Like newspapers, journals have long since outlived their usefulness. They will eventually disappear, as they should, but it will be a slow and costly death…. both financially and socially. Governments could put them out of their suffering instantly by requiring that all published papers based on public funding be publicly available without cost. I expect it will (and should!) eventually happen, but I think not any time soon. Like the rapid crumbling of the ‘iron curtain’, it will be an end that seems too long arriving, but it will happen very fast when it starts.
In US biomedical research, this requirement has been in place for the past five years, stevefitzpatrick!
It works, too. PUBMED provides instant access to all US-funded biomedical research articles since 2009.
Nowadays mathematicians and physical scientists are joining the open-access movement.
Economists, not so much.
Good on `yah, NIH!
An experiment in open publishing is currently occurring on the Jo Nova website.
It is being published in instalments and is really very interesting, both in content and design.
It is an interesting experiment, well worth following, and breaks the “Journal” mode of peer review and paywalling into itty-bitty pieces. Naturally this leaves the door open for nutty comments as well as constructive ones, but it is a very interesting venture
NOTE: open publication of the new hypothesis is not completed yet; I make no comment on the validity of the hypothesis at this point. For myself, I am content to follow the process through and then ask critical questions
Hi Judy,
“It is my impression that the more prestigious institutions pay less attention to such metrics, and rely more on peer review.”
Each university has its own culture. At Caltech the most important factor in a tenure vote is the reference letters. In addition, we read two or three of the candidate’s papers before we vote. The candidate’s funding level is excluded from the discussion and when I was the engineering dean I never discussed expectations for funding with the new professors.
In engineering, citations have major problems as a quality index because what is most important in the long run is the application in commercial products, military systems, or in scientific instruments.
Dave
Hi Dave, this confirms my impression. The engineering culture at different universities is interesting, some do seem to emphasize publications and citations rather than applications.
It depends on where in the pipeline the engineering research is. Some research is one step removed from commercial implementation, and some is several. The former should be measured by commercial potential, and the latter by other criteria.
More and more basic research, for example in nanotechnology and materials, is several steps removed from commercialization. The pot of gold is still there with these, but advances will enable more applied research, which in turn will enable commercial possibilities.
In the end, all engineering research is there to enable commercialization of something, but the distance between some of this basic research and something that makes money may be so great that nobody is going to fund it for the commercial potential. If a patent isn’t in the cards in 10 years or less, it’s not going to get the kind of funding that something that might could be in production in 5.
This isn’t really all that new, it’s just that the private sector isn’t funding this basic research like it once did. Killing Bell Labs, for example, killed a lot of privately funded basic research.
This is the dilemma Eisenhower was talking about.
The education system isn’t helping when a computer science professional gets ensconced as dean of the science and math department and suggests simulating all the labs. “Computers can do anything”. Even in my field of elementary particle physics some theoreticians are of the opinion that we don’t need any more measurements.
Charles
The recent editorial in the Economist is a good summary of some of the problems with the academic reward structure.
I’ve seen a lot of grant proposals and the successful grant getters at the best universities are very well paid by any standard. A lot of academics have their own companies and some of them have been quite lucrative. Now there is a lot of sweat equity in these companies, but they also benefit from the essentially free labor of graduate students who are very poorly paid and usually pretty bright and hard working.
The main problem I see with the grant system is just that there are “fashions” in research, and usually grant seekers are really trying to pursue something fashionable that they can then sell to the real source of the money, the government. I actually have found that fundamental research usually suffers in this setting and “colorful” or “impactful” research prospers even if its real substance is nil. I’ve heard this complaint a lot in engineering from some really top people, where the last decade’s fad was “design,” which is in their view really about interfaces, visualization, etc., and not about fundamental understanding or improved methods. Some very top-notch people have moved based on these kinds of considerations even late in their careers. There are still some holdout places that do real hard analysis research.
Another thing I have found in reviewing proposals is that there is a strong tendency to oversell the research and to put the very best face on the positive impacts. That’s a function of the extremely competitive environment. A lot of the literature from some of the “top” academics, while not without merit, is not replicable by others, at least in my experience. The Economist is excellent on the reasons for this too.
curryja @ June 16, 2014 at 6:51 pm,
I disagree with our hostess on this. If the ultimate aim is to take appropriate ‘action’, or not, then the question is poorly framed and JC’s answer is not addressing what is important and relevant, IMHO.
If we interpret “action” to mean implement policy to mitigate climate damages, then we need to focus on the objectives of that action. We need to define the objectives and then analyse the policy options. The policy selection and implementation needs to be managed like a project – by the discipline of project management http://www.pmi.org/PMBOK-Guide-and-Standards.aspx. That is, define the required project outcomes / results we want as a first step.
To achieve policy success we have to understand the many constraints we must overcome for the policy to succeed in delivering the outcomes.
Science has a contribution to make, but it is just one among many. First, it needs to provide the relevant information that allows the policy analysts to estimate the costs and benefits and probability of success of the different policy approaches.
I suggest the policy analyst need just two inputs from climate scientists:
1. the PDFs of future climate by region (including PDFs for abrupt climate change, PDFs for the duration until it begins, PDFs for the duration of the abrupt change, and PDFs for the magnitude and direction of the change)
2. the PDFs of the damage function for the possible future climates, per region.
The latter is the area that seems to be least understood.
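To make the suggestion concrete, here is a minimal sketch (my illustration, not from the comment) of how a policy analyst might combine the two proposed inputs: a discretized PDF of regional warming and a damage function yield an expected-damage estimate. The probabilities and the quadratic damage form are entirely hypothetical.

```python
# Illustrative only: expected damage = sum over outcomes of
# P(outcome) * damage(outcome), for one region.

def expected_damage(temp_pdf, damage_fn):
    """Combine a discretized warming PDF with a damage function."""
    return sum(p * damage_fn(dt) for dt, p in temp_pdf)

# Hypothetical discretized PDF of warming by 2100 for one region:
# (delta_T in degrees C, probability); probabilities sum to 1.
temp_pdf = [(0.5, 0.10), (1.0, 0.25), (2.0, 0.40), (3.0, 0.20), (4.5, 0.05)]

# Hypothetical damage function: % of regional GDP lost, quadratic in warming.
damage_fn = lambda dt: 0.3 * dt ** 2

if __name__ == "__main__":
    print(round(expected_damage(temp_pdf, damage_fn), 3))  # → 1.406
```

The same machinery extends naturally to the abrupt-change PDFs in item 1: each scenario becomes another weighted outcome in the sum. The hard part, as the comment notes, is that the damage function itself is the least understood input.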
Peter Lang,
I may be wrong, but I suspect what Dr. Curry is calling for is a prerequisite for the type of research you call for. Climate scientists don’t understand the climate enough yet to give that kind of advice. So I think her call for focusing what money is spent on observations and basic research is spot on.
Climate science is in its infancy. I believe there is just not enough known to use what is currently being produced for serious long term policy analysis.
GaryM,
I agree with your comment. But, how can you do relevant and appropriate research if we don’t first define the policy objectives we need to address? We’ll just spend another 20 years researching this sort of nonsense: http://whatreallyhappened.com/WRHARTICLES/globalwarming2.html
GaryM
YOU know that climate science is in its infancy. I know that climate science is in its infancy. The trouble is that the Climate scientists don’t seem to know its in its infancy.
tonyb
A healthy infant is naturally full or curiosity, likes to wander about and poke its head and fingers into things.
Sad to say, our infant climate science won’t leave its room since we bought it a bloody computer.
When CERN was built, what were the “policy” objectives?
tonyb,
Ain’t that the truth!
Peter Lang,
“But, how can you do relevant and appropriate research if we don’t first define the policy objectives we need to address?”
Well, we know ACO2 is a potential problem. That is all we need to know to make an effort to see if we can actually gather enough data to have more than a predetermined WAG as to temps, or better yet total climate heat content. It also justifies research into the nature and interactions of the various forcings and feedbacks, as well as whether in fact the climate can be modeled, and if so, how. Because we sure don’t know how to yet.
Hell, pure human curiosity justifies those, as suggested by mosomoso and rls above. Just not on the idiotic scale we have been funding the institutionalized confirmation bias process that calls itself “climate science.”
But I think it has been, and will continue to be, an enormous waste of (our) money to keep funding incomplete and demonstrably wrong (for purposes of predicting heat rise) GCMs. Not to mention funding for bizarre statistical modeling of global temperature from rings from dead trees, muck from the bottom of the ocean, and other sundry detritus, to tell us what to charge for petroleum over the next 30 years. And then there’s deciding which industries to shutter and which massive boondoggles to fund.
All that should wait until we have a clue of how the climate actually reacts long term to increased CO2, if we ever do.
GaryM,
But we’ve been doing this stuff for several decades and not providing the information needed for policy analysis. We’ve been wasting most of it on completely irrelevant research which is being justified by cooking up some argument that it is relevant to climate change (see link in my previous comment). You are advocating for basic research and I am advocating for applied research. I agree some basic research is justified, but the total funding for basic research should be divided up without any regard for the political ideologies that are prevalent at the time. Regarding applied research – i.e. for climate change – that should be directed to addressing the perceived issues and risks. Therefore, the objectives and desired outcomes of the research need to be clearly stated as a basis for awarding funding.
I reckon the balance of funding for climate science (which is applied research) is wrong. I think we need to put a much higher proportion of the total research effort into understanding impacts. Because we sure don’t know much about impacts yet. We sure know cooling is bad – in fact very bad – but we don’t know if warming is bad or good. We sure don’t know, do we?
As a certified and experienced project manager I can say that one of the first steps in forming a project is to evaluate the maturity of the science. As an example, consider a project with the initial objective of mitigating climate disasters. If it is determined that the science is not adequately mature, a decision could be made to introduce additional preliminary objectives that will mature the science as outlined by Dr Curry.
rls
Congratulations on being a certified and experienced project manager too. I expect you’d agree that the first step in project definition and initiation is to agree on the project requirements/capabilities/outcomes/objectives/results. You’d also agree that these must be defined in measurable terms. The acceptance criteria and acceptance authority for the highest level of deliverables must be agreed.
Once this has been done we can proceed to start planning the project. We cannot begin until these are agreed. What you have written is secondary to defining the project Scope (and deliverables).
Policy analysis, design and implementation can and should be conducted as a formal project or program (see definitions below) with multiple phases and components.
Therefore, IMO, it is a waste to continue throwing money at poorly directed research as the developed world has been doing with climate research for the past two or three decades. The research needs to be directed by what is needed for policy analysis.
IMO, the two main bits of information we need are, as I said in my first comment, PDFs on what the climate is likely to do and PDFs on the consequences of climate change. It is the latter (consequences of climate change) that has had much less work and where we have little understanding (other than ideologically driven scaremongering). We know cooling is really bad. But we don’t know much about the consequences of warming. Is warming likely to be good or bad? We do know that warm and warming has been excellent for life in the distant past and also in the past 200 years or so.
PMBOK Definitions:
“Project: a temporary endeavour undertaken to create a unique product, service or result”
“Program: A group of related projects managed in a coordinated way. Programs usually include an element of ongoing work.”
Peter Lang says:
‘It is a waste to continue throwing money at poorly directed
research as the developed world has been doing with
climate research for the past two or three decades.’
In the real world, problem directed innovators know this,
heck, even serfs know this! Guvuhmint financed institutions,
squandering other peoples’ money are seemingly unaware
of – this.
I agree with you but where we might differ regards the phase during which science maturity is evaluated. It is extremely important and can impact cost and schedule or even cause the project to be aborted. Successful project managers are cautious and would prefer aborting a project over having it continue using immature science. It is not unusual for projects to be aborted for this reason. From my experience, large projects are conceived in Washington and the first step you describe is done there. Then the project office takes over and begins, very early, to evaluate the maturity of the needed science/technology.
A measure of success is whether a scientist is employed as a scientist.
The problem with that assessment is that it merely supports the excuse that anyone who isn’t doing well within the system as it stands now is doing poorly simply because they aren’t good enough. That is a whip used to beat down and justify a lot of abuse in the system. That is the whip used to drive the endless postdoc cycle, for example. And there is no evidence for your assertion as far as I can tell.
My evidence is anecdotal: my own experience. I struggled for 10 years at post-doc level with a working husband and two young children. I did my job to fulfill the contract under which I was funded. I did not have time to think about advancing my career. I never thought of it as a career. I just liked doing calculations and thinking about relationships between the data. But no one gets paid doing that. There has to be a goal or a product. The one time I had the chance, most universities were looking for modelers (I am a satellite data person). The only satellite data jobs were for data managers (building databases). I do research at home, unpaid and very slow. Stephen McIntyre is my inspiration. I call myself a scientist. But without the institution behind the name, I am just kidding myself, right?
Nope you are not kidding yourself. For another inspiration, look to Nic Lewis (if you are unfamiliar with him, search his name on this blog). Also, search for ‘guest post’ on this blog, and you will see others who are doing research that is independent of an institution. I aspire to join your ranks (as soon as I can afford to).
It helps to have a day job. Hence the slow part, at least for me…..
Mi Cro, my husband is a scientist with a teaching job but also does research in his spare time. My youngest son has now graduated high school and I am able to focus more on my work. Being independent also means running your own computer and database. With a Linux OS, the costs are very low but the time investment is high.
Every university, college, department has its own culture. In my department, teaching is valued far more than research. “Number crunchers”, “go-getters”, and “money grubbers” are scoffed at. The only people who get their name etched in stone on the campus quad are those who receive teaching awards. Those who get funded are just supposed to be happy they got some extra cash.
What that means is that the students are taught the consensus. They do not measure anything because others have done the measurements for them. They never experience the thrill of discovery, because everything has been discovered elsewhere. As long as the teacher makes them feel happy, they are happy, and the teaching awards follow.
Beware of easy generalizations – or am I just shouting in the wind?
Thank you for eloquently articulating the “other side,” as I have heard the arguments that the focus on research is forgetting what the “real” role of universities is – teaching.
There is no perfect mix, but there should always be a mix!
The culture idea is correct. In my university, teaching was considered a punishment for those not good enough to get a lot of grants and thereby get exempted from teaching. The big grant grabbers were required to give one or two lectures a year and wow did they complain about the imposition. The ones who could not get grants were doing three, four even five full courses each year. Another university in our city placed the highest value on teaching. Very few professors were classical grant grabbers. Culture really is important.
If success is hard to measure, I think the trend now is to model it instead.
+1
and if you can’t model it with genuine rigor, maybe just pretend:
http://chronicle.com/blogs/percolator/the-magic-ratio-that-wasnt/33279
some strong parallels with Mannian statistics and the fervent resistance to consideration of McIntyre’s criticisms (I realize that proxy reconstructions may not fit some definitions of “model” but a conceptual parallel is there nonetheless):
If your predictions do not seem to come to pass, merely claim that simulations show that your predictions will prove to be true over timescales of many decades and that potential disasters will occur if your suggestions are not followed.
The measure of scientific success depends on who is holding the yardstick. At the bench level, for me it was impact. Did my work have an impact on my field? Ultimately, did my work change / shape / transform the field I worked in? Money was nice, but I was motivated by impact – still am, in fact.
When I headed a research institute, my measures changed. Now money was much more important – after all, I was now responsible for 70 professionals and about 120 students. Impact was still important, however – good work means a customer who comes back. But since my institute spanned much more than my field, I now had to look at proxies for impact. For example, did people come to us for work in our fields? Or, how competitive were we? – at the end of my tenure, we were hitting about 40-45% of our research proposals.
The University had somewhat different perspectives. They weren’t really concerned about the science – they were concerned about the scientific enterprise, i.e., they watched the money. And if there was some political benefit to be gained as well, that was another “Attaboy.”
As many have said, you get what you measure. The key is knowing what you’re going to do with the measurement. And that brings us to the brain controlling the hand that holds the yardstick.
Coming from a branch of physics where there are 2 tribes, experimentalists and theorists, I think a similar split should be considered for climate science. One tribe would be the observationists, people who measure things or take previous measurements, and derive some information about the current and/or past climate system. There may be the odd spat about proxies, but by and large peace and harmony would prevail.
The other tribe would be the theorists, who try to explain things. Only theorists indulge in warfare worthy of politicians.
Observationists can be judged on their previous and potential ability to add to our knowledge of what the climate system is and/or was. It should be relatively easy to judge this group from their publications and lecture notes.
Theorists generally think highly of themselves (less so of others), most being God’s gift to the subject. I would get the observationists to rank the theorists, as this would help to keep them “grounded”, and may help to prevent false consensuses.
The experimentalists also need funding for people, particle accelerators, satellites, etc. Where does that funding come from? Hasn’t the source, in the past, been something other than grants? It appears that this is a case of initiating a project, the first phase of project management, with which I am familiar; been there, done it.
If the concept of using citation counts (as metrics in research assessment) had been generally applied some seven decades ago, then I guess Trofim Lysenko would most likely have carried all the awards in genetics of his days.
Have you actually checked his stats? A lot of the ideas that were popular 70 years ago have turned out to be wrong. So what? We still have to make real time funding and promotion decisions. Or do you think scientists should not be paid? That would solve the decision problem.
Anyone who is working as a scientist should at least have enough funding to have a couple of students and be doing some basic research without having to get a grant. As it stands now in Canada a lot of scientists have a salary to do science but can’t do science because, for whatever reason, they have no grants. Meanwhile other “successful” scientists are not doing science. They have three dozen underlings doing the science while the paid scientist’s days are filled with endless writing of grants.
As much as I disagree with you w/r/t some aspects of the climate wars, David – I do appreciate that you do sometimes make comments like this one that challenge “skeptics” to apply due skeptical scrutiny to their logic.
Perhaps my point was not clear enough: practically every scientific article in genetics published in those days within the Soviet sphere of influence contained a Lysenko citation, as it might not have been seen as fit for publishing otherwise and, moreover, since it would have been dangerous for the author not to include one. So Lysenko was the most cited “scientist” of his period.
Lysenko rose to dominance at a 1948 conference in Russia where he denounced Mendelian thought as “reactionary and decadent” and declared such thinkers to be “enemies of the Soviet people”. His methods were not condemned by the Soviet scientific community until 1965, more than a decade after Stalin’s death.
Your point was clear, but I do not see how it applies to the issue at hand. As I asked, so what? We are not in the old Soviet Union.
Yes Joshua, it is interesting that people who do not want to see the economy restructured in the name of climate change want to see science restructured for no good reason at all.
Since “the economy” is a spontaneous order influenced by government policy, people who don’t like too much interference in the economy criticize government intervention. “Science” is likewise a spontaneous order, one that is even more influenced by government policy through control of funding. Of course people who think that the government policy is screwed up are going to want to change it. (That group includes plenty of insiders, such as Shirley Tilghman, biologist and former president of Princeton, but also people who are even more skeptical of the role of government in the system.)
frequena,
re: Lysenko and 1948
just a note on dates, Lysenko was already rising to dominance by the late 1920s/early 1930s:
http://en.wikipedia.org/wiki/Trofim_Lysenko
(I know that Wikipedia cannot always be trusted, but I think the chronology in this article is accurate)
Have been through this a couple of times in the US tech transfer sector and the best quote I have heard came from someone at the AUTM meeting a few years ago:
“People treasure what you measure”
Expect whatever metric you use to increase out of all proportion to any (and every) other measure of success.
Truth for its Own Sake, 101: Knowledge for its own sake provides not one whit of utility to society!
Except society has to decide which knowledge to pay to get. The US Federal basic research budget is about $60 billion per year. Congress decides how it will be spent, by specifying the programs that get funded and how much each gets. So this is not about knowledge for its own sake.
We need to divide this country up or go back to state’s rights so that ideologues cannot bankrupt the rest of us when they decide to spend the public’s money to land on the moon using nothing but wind power as the energy source.
From my experience congress determines budgets on the larger scale but it is government bureaucrats that determine exactly where the funds go.
Yes, RLS, they are called Program Officers. But they are answerable to Congress because their program can be reduced or even zeroed out. In some cases they are watched very closely. The point is that this is part of the complex decision system of democracy so society is making the funding decisions. It is not knowledge for its own sake. There is always a compromise between scientific value and social value.
I was a program officer. My office was far from Washington. We had to, by law, stay within the overall budget and use the right color of money. But there was never any second guessing from congress as to specific projects.
Taking the measure of science.
First, one needs a suitable categorization of the behavior and what it aims at before one can take its measure. The term “science” is far too vague a description. One can start with ANY suitable taxonomy, but one should start with a taxonomy. Over time that taxonomy can and will change, because taxonomy itself is a tool for controlling behavior. And in the end, while science might aim at “freedom of inquiry”, this is mostly a platitude.
Taking the measure of science then boils down to picking a taxonomy (any will do, within reason) and deciding on metrics and methods of behavioral control or channeling.
Here is one to start with
http://judithcurry.com/2013/05/15/pasteurs-quadrant/#more-11619
There are three basic types.
Applied Research
Use Inspired research
Pure research
The metrics for Applied Research are brutal and easy to calculate.
Did your science behavior result in a product?
How much did it make?
The metrics for Use Inspired research are squishier:
A) Did you actually advance understanding?
B) How significant is the social problem you are working on?
Squishy.
The metrics for Pure research?
Really squishy: basically, how far did you advance understanding?
If it has any use, you get major bonus points.
Note the relationship between the value of free inquiry and squishiness.
What’s missing is a metric for “how far” one advances understanding.
You cannot leave out the cost… unless you’re investing your time and money. Money is stored labor and therefore a finite resource. Wasting money on projects that provide no value to society is not productive and not the way to maximize the net present wealth of a society. We don’t, for example, need to spend public money to fund vuvuzela classes in public schools.
Wagathon,
When considering science research it is possible to focus too much on cost. It’s one reason several universities RPI being the first I believe, developed Science & Technology based MBA programs. R&D is the life blood of technology companies, yet Finance and Accounting types see it as a drain on the bottom line.
I’ll argue the same applies to education. Insisting on tight cost controls is likely to have the impact of placing shackles and blinders on educational research.
Like the military-industrial complex of years ago, it is now the government-education complex that is in need of serious downsizing.
for applied research cost is implicit in the profit question. dunce.
For the other two, yes, cost would have to be part of the equation. However, perfect cost data won’t help you until you can “measure” “improving understanding.”
How do we measure knowledge?
Counting papers and cites is a poor proxy.
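The h-index mentioned in the head post is the standard way such counts get compressed into a single number, and a minimal sketch (publication records invented for illustration) shows why a crude citation count struggles here: it cannot separate steady modest output from a single seminal advance.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical publication records: same h-index, very different profiles.
steady  = [6, 6, 6, 6, 6, 6]      # six solid, modestly cited papers
one_hit = [500, 6, 6, 6, 6, 6]    # one seminal paper among modest ones

print(h_index(steady))   # 6
print(h_index(one_hit))  # 6 -- the seminal paper is invisible to the metric
```

Both records score 6, which is exactly the “seminal paper” objection raised below.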
How did CERN come about? Wasn’t it about pure research and hugely expensive? I suspect it required leaders with great influence.
Mosher, it is stupid to call someone a dunce in this context. Especially when you are wrong, as the US budget for applied research is several billion dollars a year, with no profit involved. Most of it is for weapon systems.
I think counting papers and citations is a very good proxy for the advancement of knowledge. Discoveries that no one knows about or uses are worthless. Knowledge is a social system.
RLS, atom smashers are a major item in all big country’s research budgets, thanks largely to nuclear weapons, which proved that nuclear science is important. In the US this is the job of the Energy Department’s Basic Energy Sciences program, which has a budget over one billion dollars a year. They funded the Atlas instrument on the Large Hadron Collider, which instrument cost over half a billion dollars.
Correction, the US budget for applied research is several hundred billion dollars a year, not a mere several billion (dwarfing basic research). Typing too fast and reading too slow.
“How did CERN come about? Wasn’t it about pure research and hugely expensive? I suspect it required leaders with great influence.”
Immaterial to the question.
The question is how to measure.
looking at what happened in the past may or may not tell you how to measure.
david
Now I understand why you made no progress.
“Mosher, it is stupid to call someone a dunce in this context. Especially when you are wrong, as the US budget for applied research is several billion dollars a year, with no profit involved. Most of it is for weapon systems.”
Wrong. Applied research in defense has huge profits.
You seem to misunderstand.
If the goal of applied research is to produce things we can use, then the most straightforward metric to start with is:
A) Did your research get used in a product?
How much profit? WHICH MEANS YOU LOOK AT COST, DUNCE.
########################################
I think counting papers and citations is a very good proxy for the advancement of knowledge. Discoveries that no one knows about or uses are worthless. Knowledge is a social system.
more duncery.
A) If you want to argue that tree rings are a good proxy for temperature, you do so by comparing the two. So, to argue that the number of papers is a good proxy for advancement in pure research, you have to have a measure of two things: the papers and the advancement. The obvious counterexample is the seminal paper that makes a major advance. dunce.
B) Discoveries that no one knows about don’t exist.
C) Discoveries that have no use are PRECISELY the kinds of pure research that are hard to measure. Not worthless. You might not be able to monetize them, but the question is how do you measure knowledge for its own sake. You can of course define that out of existence, but that’s just changing the question.
No wonder you made no progress.
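The tree-ring analogy can be made concrete: calibrating any proxy means comparing it against an independent measure of the target quantity, for instance via a correlation coefficient. A minimal sketch, with all calibration numbers invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between a proxy series and its target quantity."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented calibration data: ring widths vs. instrumental temperature.
ring_width  = [0.8, 1.1, 1.0, 1.4, 1.3, 1.7]
temperature = [14.1, 14.4, 14.3, 14.8, 14.7, 15.1]

print(round(pearson_r(ring_width, temperature), 2))  # high r -> usable proxy
```

The point of the exercise is Mosher’s: the comparison requires an independent measurement of the target, which is exactly what “advancement of knowledge” lacks.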
> I think counting papers and citations is a very good proxy for the advancement of knowledge.
Another way to count:
http://news.heartland.org/newspaper-article/2014/06/11/more-benefits-wisconsins-collective-bargaining-curbs
Does that count as +6 for School Choice research?
Mosher: I asked about CERN because I thought it was for the squishy type of science, yet it apparently got funded by several governments and at large expense. How was the squishiness overcome? Perhaps a consortium of very influential physicists?
David: The Defense research dollars go mostly to defense contractors that make the profits.
“for applied research cost is implicit in the profit question”
Not necessarily
The usual liberal ad hominem reply belies ignorance: everything is profitable when money is free. E.g., as Irving could teach anyone on the Left if they actually had an open mind, you can pay back, through fuel savings, the ‘cost’ of flattening every railroad track in the US so long as there is no interest on the loan. The Left with its war on reason is costing the country in ways that can never be made whole again.
Mosher, I have no idea what you think a taxonomy will do for you as far as measuring scientific success, impact or progress is concerned. I do not think you understand the issue, nor the extensive work that is being done on it, and has been done over the last 60 years. Perhaps you should (gasp) read some of the papers, maybe even mine and my team’s. Then you could (choke) cite it.
David, I would certainly welcome a guest post on this topic, or a link to one of your previous blog posts that you think would be suitable here.
And the hits keep coming, regardless of funding source.
http://contextearth.com/2014/06/17/the-qbom/
Keep on with your pity party
History provides plenty of examples of distinguished researcher/academicians who found ways to prosper outside the default treadmill of “teach undergraduates and write proposals”:
• Charles Ives / composer and insurance executive
• Nathaniel Bowditch / mathematician and actuary
• David E Shaw / structural biologist and trader
• Claude Shannon / engineer and investor
• Michael Spivak / mathematician and publisher
• Donald Knuth / computer scientist and author
• Benjamin Franklin / scientist and statesman
• Craig Venter / scientist and entrepreneur
• Si Ramo / engineer and industrialist
• James Harris Simons / mathematician and hedge fund manager
• Jane Goodall / primatologist and writer
As Arnold Schoenberg wrote of Charles Ives:
Not to mention, Isaac Newton pursued a career path that (literally) coined money!
Conclusion: It is well for students (in particular) to keep in mind that there is no “one size and one style suits all” of academic achievement.
Add a couple more:
Seth Carlo Chandler : Astronomer and Actuary
Thomas Bayes : Statistician, philosopher, and Presbyterian minister
Nathan Myhrvold : CTO of Microsoft, photographer, chef, and Climate Scientist
I’ve seen this before in sales, and we recently saw it at the VA: low wait times to get a bonus, even if you have to keep a secret list.
Hi Judy,
“The engineering culture at different universities is interesting, some do seem to emphasize publications and citations rather than applications.”
Believe me, if they had the application successes, they would emphasize them. However, Harold’s observation that some of the applications are a long time in the making is a fair one.
Dave
Dave, here is something that surprised me at GT. I recently led a multidisciplinary proposal to NSF (didn’t get funded) that was targeted at increasing the resilience of the electric grid in the face of weather disasters. The particular NSF program required extensive engagement with decision makers from govt and industry (in our case, regional power providers). I figured the electrical and industrial engineers on our team (modeling the electric grid) would be the ones with good contacts, but this was most definitely not the case (only one of the industrial engineers had contact with a regional energy provider). It turned out the people with the contacts were myself, an energy policy person and a project manager for an Institute.
sorry, but that proposal sounds like one of those things that just drives the lay person nuts. Granted, I don’t know the details, but from your abstract:
First, it’s weather that you’re hardening the grid for- “weather disasters” happen and need to be accounted for in the grid.
Second, requiring “extensive engagement with decision makers from govt and industry” isn’t a science pursuit; it’s policy. The NSF can help by doing its job and focusing on science and engineering questions. Those questions are “what breaks, why, how to fix it, how to avoid breaking it or speed up repair time and cost.” With that information, decision makers decide. And science needs to accept the fact that a mandate might not be the answer.
It sounds pedantic, but I believe this is most of the problem with the debate in climate. We’re stuck in an endless (26 years now) debate over how wrong the temperature projections are because one “team” wants to be able to say it’s bad enough to enact goofy policies.
Get the science funding back on science questions- yes, continue to try to improve weather forecasts, but more importantly figure out how to make nukes, windmills, solar panels, and bio fuels work and be honest about when they don’t. I don’t even think there’s (much) of a problem, but I’m more than happy to see science do science.
If you lived in Florida, or were impacted by Hurricane Sandy, you would get this. How can we minimize power outages exceeding 2 days from such storms? This has nothing to do with climate change.
” I figured the electrical and industrial engineers on our team (modeling the electric grid) would be the ones with good contacts,…”
Nice play on words there. The malfunction was caused by rust on the contacts.
It is unclear what benefit a power company would get from this “multidisciplined” approach beyond the information they already have available. Power companies (and those that distribute) have a wealth of historical data on why power outages have occurred.
It seems pretty simple to do root cause analysis and a hardening of the system to prevent a recurrence. I’d guess (with no reasonable information to support the conclusion) that most power outages are caused by poor maintenance and not following established safety procedures.
I live on the coast and was impacted by Sandy and Isabel.
The science questions were – with X storm surge and wind, what breaks, why, how do we harden it and/or get it fixed faster. The policy question is which of those things are cost-effective- note that this means in some areas you pay more for hardened power, some areas you have to expect to be on your own for a while, and some areas you aren’t allowed to build a house.
My area of the beach lost power very briefly because it was cost effective to bury lines and harden above-ground equipment. Friends were without power for days, but they always are during hurricanes because it was not cost effective to bury lines there. Hurricanes are part of life here.
Look back at your snow issue in Georgia: a science-sponsored extensive engagement with decision makers re “weather disasters” would be pointless. Research on cheap, fast modifications to existing DPW trucks and de-icing materials to mitigate snow would not be pointless. Ultimately, science can’t tell you whether it makes more sense to buy those things for the rare snow or to tell the population to deal with it and spend the $$ on something else. No amount of time spent equating “snow” with “disaster” will change that.
On a related note, one other electrical grid reliability issue is cybersecurity. I’m moderately involved in the issue regarding the electric grid, water systems, and other utilities, and am finding that there are certain parties who don’t want to talk about it, because it rains on their grand plans for “cloud” telemetry. You start arguing about this for a while, and you soon realize that some very big (Microsoft, IBM, etc.) toes are being stepped on by raising these issues. They would rather sell “cloud” sizzle than security steak.
Usually when something like this doesn’t make sense, there’s somebody not too far away, with a lot of money on the line.
As far as the grid is concerned, all of this reliability and security stuff goes against what they would rather do with the capital. Like all engineers, they want toys, the more expensive, the more fun.
Did you reach out to alumni?
I know more than one GT alum in the electrical utility industry, including one of my brothers.
Rob,
RE: “Power companies (and those that distribute) have a wealth of historical data on why power outages have occurred.”
One might think so, but that is not the case. Very few power companies conduct root cause and failure analysis following storm- or weather-induced outages. Their primary objective is to get the system back up. That is not conducive to collecting evidence on the cause of failure.
Poll a utility company on what they consider the leading cause of pole failures (which almost always lead to power outages) and they are likely to say storm damage. For a presentation last year I looked into our own data on pole replacement and was surprised to find the common consensus was wrong. Cars hitting poles and environmental degradation turned out to be the two leading causes of pole replacements due to “failure”. By far the leading cause of early replacement is public improvement projects.
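The kind of tally described above is straightforward once replacement records carry a cause category. A minimal sketch, with every count invented purely for illustration:

```python
from collections import Counter

# Hypothetical pole-replacement records; the categories mirror the ones
# mentioned in the comment, but all counts are made up.
records = (
    ["public improvement project"] * 40
    + ["car hit pole"] * 25
    + ["environmental degradation"] * 20
    + ["storm damage"] * 10
    + ["other"] * 5
)

tally = Counter(records)
for cause, n in tally.most_common():
    print(f"{cause}: {n / len(records):.0%}")
```

Even this crude breakdown is enough to test the “storm damage is the leading cause” assumption against the data.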
Timing
Thanks for the education about the information maintained by power companies/power distribution companies. I am surprised that they do not have better information on failure analysis, since it would seem to be a key component of long-term costs and customer service. That data would seem more valuable to their decision makers than a study by a group of academics (no offense to Judy).
Hi Judy,
“Dave, here is something that surprised me at GT.”
I think a lot of US EE departments have not maintained their connections to the electrical utilities well. It may be because the individual utilities have not funded much research.
Dave
More seriously, my understanding is that many high-status EE departments have shifted very far from thinking about power issues, substituting an emphasis on electronics and digital topics. Even back in the early 1980s, my best-in-his-class EE roommate had to study up on electric motors and electric power on his own before taking the GRE. All his coursework and independent research had been on signal processing, linear systems theory, etc. That emphasis came in handy when he later worked on advanced radar stuff and then became a patent attorney, but it suggests the department wasn’t too oriented to the electric power aspects of EE.
Rob,
I suspect the reason they don’t is that in the long run it accounts for a small portion of cost. A wood pole has an expected life of 50 years (this can vary by type of wood – we use Doug fir; utilities in the NW at one time used western red cedar, and I’ve seen poles that are approaching 100 years in service). The average service life is closer to 20 years, meaning there is a very good chance you will end up replacing the pole (or removing it when undergrounding facilities) before it fails.
Another factor is that utilities often will carry insurance against storm damage, or they will have the ability to go to the commission and obtain a rate increase to cover extraordinary storm restoration costs. Finally, there is the point of priorities when poles fail during a storm: getting the lights back on outweighs all other considerations. Try dealing with folks who are into their 2nd or even 3rd week without power. They ain’t baking you cookies.
Most utilities do have inspection and treat programs now, as early failure due to environmental conditions (rot, insects, woodpeckers) can be a significant cost.
I’d intended to put this comparison here, raising an issue of judging “future promise” vs. existing (minimal) quantity and influence at the time of a tenure decision…. it’s only one case, from outside the sciences/engineering, but I think it suggests that there should be university pathways for people displaying exceptional conceptual/intellectual promise, even if the quantity of publications is not (yet) there:
http://judithcurry.com/2014/06/16/what-is-the-measure-of-scientific-success/#comment-598734
This issue of metrics has a stark realization at Kings College London, where they are firing 120 scientists. The main criteria for the firing appears to be the amount of grant funding.
Measuring the value of scientists with a financial metric – that sounds logical. (NOT!)
What has become of the academic community when it resorts to such an irrational practice that turns science into an aspect of economics and sociology in this way? Evidently it can no longer have any idea of what the essential purpose of science is – the increase of human knowledge of reality. Surely the only rational criterion for evaluating the works of scientists is the extent to which they increase our knowledge of reality.
Although knowledge is a subjective mental quality that cannot be measured directly, it can be measured indirectly by behavioral metrics based on information theory. It is the task of psychologists, not sociologists or administrators, to create, develop and refine such metrics for application and use throughout all the sciences. Claude Shannon invented information theory in 1948 and some 66 years later the academic community still has not applied it to the objective evaluation of works of science and scientists. This is shameful surely.
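For what it’s worth, the Shannon quantity most naturally suited to this proposal is entropy: a study’s contribution could, in principle, be scored as the reduction in uncertainty over competing hypotheses. A toy sketch, with the hypothesis probabilities purely hypothetical:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Purely hypothetical: four equally likely hypotheses before an experiment,
# collapsing toward one favored hypothesis afterwards.
before = [0.25, 0.25, 0.25, 0.25]
after  = [0.85, 0.05, 0.05, 0.05]

gain = entropy(before) - entropy(after)   # bits of uncertainty removed
print(round(gain, 2))  # 1.15
```

The hard part, of course, is not the arithmetic but assigning defensible probabilities to hypotheses, which is where such behavioral metrics would stand or fall.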
A similar problem arises in private industry. For example, many large oil companies realize oil exploration is extremely risky. Many wells are “dry”. This encourages the exploration community to recommend wells and emphasize getting them approved rather than making sure the geology makes sense. There are perverse rewards for style and number of wells recommended as well as the quality of the slides and the sales ability.
A similar problem arises in new business development, where some “developers” come up with business failures but are rewarded merely for selling the idea.
I also saw that in the way research funding was allocated within a corporation. Some “pet ideas” held by supervisors were funded and anything unusual with a different slant was given a quick burial. I still have a couple of ideas I couldn’t get funded for some lab work and fine scale models of lab results. Cheap stuff, but I was looking at things upside down. And if it happened to me, it must happen much much more, after all I’m just a dabbler.
I would have to say not failing
Or maybe not
It becomes a vicious cycle when you work on a progressive project. Unfortunately, the bureaucracy slows down the research process. As a result, important findings are held up for years. It certainly starts to seem counterproductive.
This guy should sue.
From the article:
…
American University statistician tells The Fix: Belief in climate catastrophe ‘simply not logical’
If one would have asked statistician Caleb Rossiter a decade ago about global warming, he says he would have given the same answer that President Barack Obama offered at a recent commencement address.
“He castigated people who don’t believe in climate catastrophe as some sort of major fools,” Rossiter says of the president’s speech, adding he would have agreed with the president – back then.
But Rossiter would give a different answer today.
“I am simply someone who became convinced that the claims of certainty about the cause of the warming and the effect of the warming were tremendously and irresponsibly overblown,” he said in an exclusive interview Tuesday with The College Fix. “I am not someone who says there wasn’t warming and it doesn’t have an effect, I just cannot figure out why so many people believe that it is a catastrophic threat to our society and to Africa.”
For this belief – based in a decade’s worth of statistical research and analysis on climate change data – Rossiter was recently terminated as an associate fellow at the Institute for Policy Studies, a progressive Washington D.C. think tank.
…
http://www.thecollegefix.com/post/18034/