by Judith Curry
Mathbabe asks ‘Whom can you trust?’ and discusses trusting experts, climate change research, and scientific translators.
Mathbabe (very cool name for a blog) has an interesting series of posts related to trusting experts.
On Nate Silver
The first post is Nate Silver confuses cause and effect, ends up defending corruption. Excerpts:
Silver says:
This is neither the time nor the place for mass movements — this is the time for expert opinion. Once the experts (and I’m not one of them) have reached some kind of a consensus about what the best course of action is (and they haven’t yet), then figure out who is impeding that action for political or other disingenuous reasons and tackle them — do whatever you can to remove them from the playing field. But we’re not at that stage yet.
Mathbabe says:
My conclusion: Nate Silver is a man who deeply believes in experts, even when the evidence is not good that they have aligned incentives with the public.
Call me “asinine,” but I have less faith in the experts than Nate Silver: I don’t want to trust the very people who got us into this mess, while benefitting from it, to also be in charge of cleaning it up.
From my experience working first in finance at the hedge fund D.E. Shaw during the credit crisis and afterwards at the risk firm Riskmetrics, and my subsequent experience working in the internet advertising space (a wild west of unregulated personal information warehousing and sales) my conclusion is simple: Distrust the experts.
Why? Because you don’t know their incentives, and they can make the models (including Bayesian models) say whatever is politically useful to them. This is a manipulation of the public’s trust of mathematics, but it is the norm rather than the exception. And modelers rarely if ever consider the feedback loop and the ramifications of their predatory models on our culture.
The truth is somewhat harder to understand, a lot less palatable, and much more important than Silver’s gloss. But when independent people like myself step up to denounce a given statement or theory, it’s not clear to the public who is the expert and who isn’t. From this vantage point, the happier, shorter message will win every time.
This raises a larger question: how can the public possibly sort through all the noise that celebrity-minded data people like Nate Silver hand to them on a silver platter? Whose job is it to push back against rubbish disguised as authoritative scientific theory?
It’s not a new question, since PR men disguising themselves as scientists have been around for decades. But I’d argue it’s a question that is increasingly urgent considering how much of our lives are becoming modeled. It would be great if substantive data scientists had a way of getting together to defend the subject against sensationalist celebrity-fueled noise.
There’s an easy test here to determine whether to be worried. If you see someone using a model to make predictions that directly benefit them or lose them money – like a day trader, or a chess player, or someone who literally places a bet on an outcome (unless they place another hidden bet on the opposite outcome) – then you can be sure they are optimizing their model for accuracy as best they can. And in this case Silver’s advice on how to avoid one’s own biases is excellent and useful.
But if you are witnessing someone creating a model which predicts outcomes that are irrelevant to their immediate bottom-line, then you might want to look into the model yourself.
A lawyer’s perspective
Law professor Stephanie Tai responds to Mathbabe’s post in Stephanie Tai on Deference to Experts. Excerpts:
So when you apply the claim that Cathy makes at the end of her post–”If you see someone using a model to make predictions that directly benefit them or lose them money – like a day trader, or a chess player, or someone who literally places a bet on an outcome (unless they place another hidden bet on the opposite outcome) – then you can be sure they are optimizing their model for accuracy as best they can. . . . But if you are witnessing someone creating a model which predicts outcomes that are irrelevant to their immediate bottom-line, then you might want to look into the model yourself.”–I’m not sure you can totally put climate scientists in that former category (of those that directly benefit from the accuracy of their predictions). This is due to the nature of most climate work: most researchers in the area only contribute to one tiny part of the models, rather than produce the entire model themselves (thus, the incentives to avoid inaccuracies are diffuse rather than direct); the “test time” for the models is often relatively far into the future (again, making the incentives more indirect); and the diffuse reputational gain that an individual climate scientist gets from being part of a team that might partly contribute to an accurate climate model is far less direct than the examples given of day traders and chess players or “someone who literally places a bet on an outcome.”
What that in turn seems to mean is that under Cathy’s approach, climate scientists would be viewed as in the latter category—those creating models that “predict outcomes that are irrelevant to their immediate bottom-line,” and thus deserve people looking “into the model [themselves].” But at least from what I’ve seen, there is *so* much out there in terms of inaccurate and misleading information about climate models (by folks with stakes in the *perception* of those models) that chances are, a lay person’s inquiry into climate models has a high chance of being shaped by similar forces with which Cathy is (in my view appropriately) concerned. Which in turn makes me concerned about applying this approach.
So what’s to be done? I absolutely agree with Cathy’s statement that “when independent people like myself step up to denounce a given statement or theory, it’s not clear to the public who is the expert and who isn’t.” It would seem, from what she says at the end of her essay, that her answer to this “expertise ambiguity” is to get people to look into the model when expertise is unclear. But that in turn raises a whole bunch of questions:
Given the high degree of training it takes to understand any of these individual areas of expertise, and given that we encounter so many areas in which this sort of deeper understanding is needed to resolve policy questions, how can any individual actually apply that initial exhortation–to look into the model yourself–in every instance where expertise ambiguity is raised? Expert reliance isn’t perfect, sure–but it’s a potentially pragmatic response to an imperfect world with limited time and resources.
Do my thoughts above mean that I think we should blindly defer to experts? Absolutely not. I’m just pointing this out as something that weighs in favor of listening to experts a little more.
So how to address this balance between skepticism and the lack of time to do full inquiries into everything? I totally don’t have the answers, though the kinds of things I explore are procedural ways to address these issues, at least when legal decisions are raised–for example,
* public participation processes (with questions as to both the timing and scope of those processes, the ability and likelihood that these processes are even used, the accessibility of these processes, the susceptibility to “abuse,” and the weight of those processes in ultimate decisionmaking)
* scientific ombudsman mechanisms (with questions of how ombudsmen are to be selected, the resources they can use to work with citizen groups, and the training of such ombudsmen)
* the formation of independent advisory committees (with questions of the selection of committee members, conflict of interest provisions, and the authority accorded to such committees)
* case law requiring certain decisionmaking heuristics in the face of scientific uncertainty, to avoid too much susceptibility to data manipulation (with questions about the incentives those heuristics create for potential funders of scientific research, and the ability of judges to apply such heuristics in a consistent manner)
Mathbabe responds
A follow on post by Mathbabe is titled On trusting experts, climate change research, and scientific translators. Excerpts:
Stephanie asks three important questions about trusting experts, which I paraphrase here:
- What does it take to look into a model yourself? How deeply must you probe?
- How do you avoid being manipulated when you do so?
- Why should we bother since stuff is so hard and we each have a limited amount of time?
People: I’m not asking you to simply be skeptical, I’m saying you should look into the models yourself! It’s the difference between sitting on a couch, pointing at a football game on TV, and complaining about a missed play, versus getting on the football field yourself and trying to figure out how to throw the ball. The first is entertainment but not valuable to anyone but yourself. You are only adding to the discussion if you invest actual thoughtful work into the matter.
Another thing about climate research. People keep talking about incentives, and yes, I agree wholeheartedly that we should follow the incentives to understand where manipulation might be taking place. But when I followed the incentives with respect to climate modeling, they led me straight to climate change deniers, not to researchers.
Do we really think these scientists working with their research grants have more at stake than multi-billion dollar international companies who are trying to ignore the effect of their polluting factories on the environment? People, please. The bulk of the incentives are definitely with the business owners. Which is not to say there are no incentives on the other side, since everyone always wants to feel like their research is meaningful, but let’s get real.
I like this idea Stephanie comes up with:
Some sociologists of science suggest that translational “experts”–that is, “experts” who aren’t necessarily producing new information and research, but instead are “expert” enough to communicate stuff to those not trained in the area–can help bridge this divide without requiring everyone to become “experts” themselves. But that can also raise the question of whether these translational experts have hidden agendas in some way. Moreover, one can also raise questions of whether a partial understanding of the model might in some instances be more misleading than not looking into the model at all–examples of that could be the various challenges to evolution based on fairly minor examples that when fully contextualized seem minor but may pop out to someone who is doing a less systematic inquiry.
This raises a few issues for me:
- Right now we depend mostly on the press to do our translations, but journalists aren’t typically trained as scientists. Does that make them more prone to being manipulated? I think it does.
- How do we encourage more translational expertise to emerge from actual experts? Currently, in academia, translating one’s research for the general public is scarcely encouraged or rewarded, and outside academia even less so.
- Like Stephanie, I worry about hidden agendas and partial understandings, but I honestly think they are secondary to getting a robust system of translation started to begin with, which would hopefully in turn engage the general public with the scientific method and current scientific knowledge. In other words, the good outweighs the bad here.
JC comments: I can’t remember how I managed to come across these posts, but I thought they were pretty interesting and they raise some good topics for us to discuss.
