by Judith Curry
“Universities, then, are doing the research. Governments, and their public services, want the evidence. Why is it so difficult to get these two worlds to meet at an intersection of knowledge that can influence in significant ways the making of public policy? Why does Australia’s large public investment in research and development contribute so little to addressing the political response to the nation’s economic and social challenges?” – Peter Shergold
Future Directions for Scientific Advice in Whitehall is a volume that was assembled by Robert Doubleday of Cambridge University and James Wilsdon of the University of Sussex on the occasion of Mark Walport taking over as the UK government’s chief scientific adviser.
The volume covers issues of broad relevance to science advice for government. I’ve excerpted some gems that I found to be particularly insightful:
Perhaps the most important finding of almost all research on this topic is that demand matters as much as supply. The most brilliant advice may go wholly unheeded if it’s not fitted to the social context of decision makers, the psychology of people making decisions in a hurry and under pressure, and the economics of organisations often strapped for cash. What works for whom and in what circumstances are crucial factors; and evidence and advice have to make themselves useful if they are to be used.
So how should advisers raise the odds of having impact – and of being useful? In my experience, the successful ones understand two fundamental aspects of the context in which their advice will be heard, both of which are radically different from the cultures they are likely to have experienced for most of their careers outside government.
The first is that they are operating in a context where there are often multiple goals and conflicting values. As a result, there may often not be a single right answer (though there may be any number of demonstrably wrong answers). Instead there will be right answers that are more or less aligned to the priorities of government (and of the public). The better the providers of advice understand decision makers’ perspectives and needs the more likely they are to be influential.
Take energy. I twice had to oversee reviews of energy policy and in each case the scientific analysis of such things as potential energy sources, current and future renewables or carbon scenarios, had to be linked to the very different goals of ensuring affordable energy, energy security, and protecting the world from catastrophic climate change. Scientific method cannot tell us which of these goals is more important. This is a matter for judgement and wisdom – and as the study of wisdom tells us, wisdom tends to be context-specific, rather than universal like natural science.
The second vital, but not always obvious, point is that governments have to deal with multiple types of knowledge. A minister making decisions on a topic such as the regulation of pesticides or badger culls may need to take account of many different types of knowledge each of which is provided by a different group of experts. These include: evidence about policy, such as evaluations of public health programmes; knowledge about public opinion, and what it may or may not support; knowledge about politics, and the likely dynamics of party or parliamentary mood; intelligence, whether human or signals; statistics; economics; history; knowledge about Civil Service capacities; and performance data, for example on how hospitals or police forces are doing.
Trump cards and clever chaps
Formal scientific knowledge sits alongside these other types of knowledge, but does not automatically trump the others. Indeed, a politician or civil servant who acted as if there was a hierarchy of knowledge with science sitting unambiguously at the top would not last long. The consequence is that a scientist who can mobilise other types of knowledge on his or her side is likely to be more effective than one who cannot; for example, by highlighting the economic cost of future floods and their potential effect on political legitimacy, as well as their probability.
These points help to explain why the role of a chief scientific adviser (CSA) can be frustrating. Simply putting an eminent scientist into a department may have little effect if they don't also know how to work the system, or how to mobilise a large network of contacts. Not surprisingly, many who aren't well prepared for their roles as brokers feel that they rattle around without much impact.
For similar reasons, some of the other solutions that have been used to raise the visibility and status of scientific advice have tended to disappoint. Occasional seminars for ministers or permanent secretaries to acclimatise them to new thinking in nanotechnology or genomics are useful but hardly sufficient when most of the real work of government is done at a far more junior level. This is why some advocate other, more systematic, approaches to complement what could be characterised as the 'clever chap' theory of scientific advice.
First, these focus on depth and breadth:

- acclimatising officials and politicians at multiple levels, and from early on, to understanding science, data and evidence through training courses, secondments and simulations;
- influencing the media environment as much as insider decision making (since in practice this will often be decisive in determining whether advice is heeded);
- embedding scientists at more junior levels in policy teams;
- linking scientific champions in mutually supportive networks; and
- opening up more broadly the world of evidence and data so that it becomes as much part of the lifeblood of decision making as manifestos.

Here the crucial point is that the target should not just be the very top of institutions: the middle and lower layers will often be more important. A common optical mistake of eminent people in London is to overestimate the importance of the formal relative to the informal, the codified versus the craft.
Second, it’s vital to recognise that the key role of a scientific adviser is to act as an intermediary and broker rather than an adviser, and that consequently their skills need to be ones of translation, aggregation and synthesis as much as deep expertise. So if asked to assess the potential commercial implications of a new discovery such as graphene; the potential impact of a pandemic; or the potential harms associated with a new illegal drug, they need to mobilise diverse forms of expertise. Their greatest influence may come if – dare I say it – they are good at empathising with ministers who never have enough time to understand or analyse before making decisions. Advisers who think that they are very clever while all around them are a bit thick, and that all the problems of the world would be solved if the thick listened to the clever, are liable to be disappointed.
Institutions that play a watchdog role in society offer a persistent challenge for democracy: who shall watch the watchers? We shrink at the thought of unlimited police power or judges who place themselves above the law. Scientific advice is not immune to such concerns. Its role is to keep politicians and policymakers honest by holding them to high standards of evidence and reason. Arbitrary and unfounded decisions are anathema to enlightened societies. But who ensures the rationality of science advisers, making sure that they will be held accountable for the integrity of their advice?
That question may seem too trivial to be worthy of serious consideration. Aren’t science advisers accountable at the end of the day to science itself? Most thoughtful advisers have rejected the facile notion that giving scientific advice is simply a matter of speaking truth to power. It is well recognised that in thorny areas of public policy, where certain knowledge is difficult to come by, science advisers can offer at best educated guesses and reasoned judgments, not unvarnished truth. They can help define plausible strategic choices in the light of realistic assessments of evidence; rarely can they decree the precise paths that society should follow. Nonetheless, it is widely assumed that the practice of science imposes its own discipline on science advisers, ensuring that they are bound by known facts, reliable methods, responsible professional codes, and the ultimate test of peer review. Seeing their role as apolitical, science advisers are not inclined to introspection in situations where their work fails to persuade. It seems more natural to blame external factors, from public ignorance and media distortion to the manipulation of science by powerful corporate funders or other large interest groups.
STS scholarship, backed by detailed studies of science advice in action, has come to almost the opposite conclusion: that better science advice requires more intelligent engagement with politics. This observation may initially sit uncomfortably with advisers but should in the end lead to more accountable uses of their knowledge and judgment. The most relevant findings from STS research can be summarised as follows:
• First, ‘regulatory science’ (the science most relevant to policy) does not simply exist as such in the outside world but rather is the output of advisory processes which are themselves loaded with value judgments, often in a form that social scientists call ‘boundary work’: for example, which facts and disciplines are relevant; when is new knowledge reliable enough for use; which dissenting viewpoints deserve to be heard; and when is action appropriate, even if not all questions are answerable on the basis of available knowledge. Accordingly, science advice can never stand wholly aloof from politics. The problem is how to manage its boundary-straddling role without compromising scientific integrity.
• Second, public refusal to accept the judgment of science advisers does not reflect intellectual ‘deficits’ on the public’s part but rather the failure of decision making processes to resolve underlying questions of responsibility: for example, who will be monitoring risky new technologies after they have been released into the market, and who will pay if the consequences are unintended but harmful? Science advisers may consider these issues outside their remit, but publics have good grounds to believe that experts will take note of these contextual factors when they advise policymakers on matters of risk and safety.
• Third, science advice often tracks the promises and practices of science itself, attaching disproportionately greater value to what is known or can be learned than to what is unknown or outside the reach of the advisers’ immediate consciousness. That tendency leads in turn to a relative disfavoring of hard-to-gather social and behavioral evidence, as compared to measurable facts about the natural world. It also makes the process of science advice inattentive to hierarchies of power and money, not to mention to cultural biases and global resource inequalities, which shape the problem framings and methods of investigation that scientists bring to bear on social problems.
• Fourth, science advice partakes of, and to some degree reproduces, salient features of a nation's or region's political culture, including a society's relative weighting of experts' technical knowledge, personal integrity and experience, and capacity to represent significant viewpoints in society. In turn, those ingrained but on the whole invisible cultural preferences may affect an advisory system's own resilience and ability to learn from its past mistakes and false turns.
Science advice has become a vitally important site of knowledge creation in modern societies, a site in which knowledge combines with wisdom to everyone’s benefit. It is time for science advisory systems to recognise that – to stay honest – they too need critics from the communities of research studying how knowledge and action are linked together. In democracies, no institutions of power should be beyond critique. If judges may not presume to stand above the law, still less should science advisers seek to insulate themselves from the critical gaze of the sciences of science advice.
It is obvious how scientific advice ought to get incorporated into policy. Scientists should be called on to set out the implications of the latest discoveries or technologies, while policymakers, frowning with concentration, listen attentively, ask astute and penetrating questions, and then put together a policy firmly rooted in evidence.
It is equally obvious that this is not what happens. The process is nonlinear, sometimes generating policies that have scant regard for evidence, with occasional breakthroughs, and many delays or cul-de-sacs. Yet, oddly, instead of standing back and trying to explain what is observed in practice and using that practical understanding to create better processes, systems are created based on what ought to work. Scientists bemoan this messy system and insist that, if only it were rational and linear, how much better it would be. It is ironic that an area of human endeavour that is based on positive analysis should find itself making normative proposals. Before suggesting how the system ought to work it would be worth applying the scientific method to understanding how scientific advice gets incorporated into policy.
Jack Stilgoe and Simon Burall
The most important conclusions of the Phillips Inquiry as they relate to the question of openness are that:
- Trust can only be generated by openness.
- Openness requires recognition of uncertainty, where it exists.
- The public should be trusted to respond rationally to openness.
- Scientific investigation of risk should be open and transparent.
- The advice and reasoning of advisory committees should be made public.
Openness, according to Phillips, is not just about transparency. It also, crucially, is about being open-minded. Opening up expert advice means paying attention to scientific uncertainties, rather than obscuring them. It means opening up the inputs to scientific advice (who is allowed to contribute, how and on what terms?). And it means changing the outputs from advice, such that they do not offer single prescriptions but rather help to inform the range of available policy options.
| Old model of expertise | New model of expertise |
| --- | --- |
| Demanding public trust | Trusting the public |
| Expecting expert consensus and prescription | Expecting plural and conditional advice |
| Managerial control | Distributed control |
| Presenting the evidence | Presenting evidence, judgement and uncertainty |
Over the last 30 years, policymakers have rethought the contribution that publics and experts can make to policymaking. With both, we can see that the word ‘open’ is not straightforwardly defined. Are we talking about open doors, welcoming in new perspectives; open minds, reflecting on the limits of centralised control and predictability; or transparent but closed windows, revealing policy but maintaining control of its contributors?
If we adopt the instrumental rationality of 'evidence-based policy', we can tie ourselves in knots trying to work out how expertise, evidence and public inputs should all be 'balanced' as we assemble a justification for policy action. If, however, we relax this view, and recognise that policy is often messy, surprising and responsive – what Charles Lindblom memorably called 'muddling through' – then we can identify more constructive, sympathetic roles for these plural inputs. They all, in their way, help us make sense of the many dimensions of issues.
JC comments: This volume fortuitously became available the week before my Congressional testimony. I tried to operate under the 'new model of expertise' and to be as non-normative as possible (recall the previous post, Congressional testimony and normative science). I hope that my testimony was convincing to the Republicans, and helps move them to a more defensible and rational position on climate science. Then hopefully we can put the debate on climate policy back into the sphere of politics and economics, where it belongs. That was the 'agenda' in my testimony.