by Judith Curry
There is an interesting new paper in press in Behavioral and Brain Sciences that is generating substantial discussion in the blogosphere, entitled “Why do humans reason? Arguments for an argumentative theory.” Perhaps this article can provide us with some insights on the climate debate.
Why do humans reason? Arguments for an argumentative theory
Hugo Mercier and Dan Sperber
Abstract: Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.
To be published in Behavioral and Brain Sciences. Link to full article [here].
Chris Mooney has a user-friendly summary from the authors, some excerpts below:
Current philosophy and psychology are dominated by what can be called a classical, or ‘Cartesian’ view of reasoning. Put plainly, it’s the idea that the role of reasoning is to critically examine our beliefs so as to discard wrong-headed ones and thus create more reliable beliefs—knowledge. This knowledge is in turn supposed to help us make better decisions. This view is—we surmise—hard to reconcile with a wealth of evidence amassed by modern psychology. Tversky and Kahneman (and many others) have shown how fallible reasoning can be. Others have shown that sometimes reasoning too much can make us worse off: it can unduly increase self-confidence, allow us to maintain erroneous beliefs, create distorted, polarized beliefs and enable us to violate our own moral intuitions by finding excuses for ourselves.
Our theory—the argumentative theory of reasoning—suggests that instead of having a purely individual function, reasoning has a social and, more specifically, argumentative function.
However, for communication to be possible, listeners have to have ways to discriminate reliable, trustworthy information from potentially dangerous information—otherwise speakers would be wont to abuse them through lies and deception. One way listeners and speakers can improve the reliability of communication is through arguments. The speaker gives a reason to accept a given conclusion. The listener can then evaluate this reason to decide whether she should accept the conclusion.
If reasoning evolved so we can argue with others, then we should be biased in our search for arguments. In a discussion, I have little use for arguments that support your point of view or that rebut mine. Accordingly, reasoning should display a confirmation bias: it should be more likely to find arguments that support our point of view or rebut those that we oppose. Interestingly, the confirmation bias need not be a drag on a group’s ability to argue. To the extent that it is mostly the production, and not the evaluation of arguments that is biased—and that seems to be the case—then a group of people arguing should still be able to settle on the best answer, despite the confirmation bias.
Mooney’s take on this:
But individuals–or, groups that are very like minded–may go off the rails when using reasoning. The confirmation bias, which makes us so good at seeing evidence to support our views, also leads us to ignore contrary evidence. Motivated reasoning, which lets us quickly pull together the arguments and views that support what we already believe, makes us impervious to changing our minds. And groups where everyone agrees are known to become more extreme in their views after “deliberating”–this is the problem with much of the blogosphere.
In looking for other perspectives on this, I encountered an interesting post on lesswrong.com. From their post:
The paper defends reasoning as serving argumentation, in line with evolutionary theories of communication and signaling. In rich human communication there is little opportunity for “costly signaling”, that is, signals that are taken as honest because too expensive to fake. In other words, it’s easy to lie.
To defend ourselves against liars, we practice “epistemic vigilance”; we check the communications we receive for attributes such as a trustworthy or authoritative source; we also evaluate the coherence of the content. If the message contains symbols that match our existing beliefs, and packages its conclusions as an inference from these beliefs, we are more likely to accept it, and thus our interlocutors have an interest in constructing good arguments. Epistemic vigilance and argumentative reasoning are thus involved in an arms race, which we should expect to result in good argumentative skills.
If reasoning is a skill evolved for social use, group settings should be particularly conducive to skilled arguing. Research findings in fact show that “truth wins”: once a group participant has a correct solution they will convince others. A group in a debate setting can do better than its best member.
The argumentative theory, Mercier and Sperber argue, accounts nicely for motivated reasoning, on the model that “reasoning anticipates argument”. Such anticipation colors our evaluative attitudes, leading for instance to “polarization” whereby a counter-argument makes us even more strongly believe the original position, or “bolstering” whereby we defend a position more strongly after we have committed to it.
These attitudes are favorable to argumentative goals but actually detrimental to epistemic goals. This is particularly evident in decision-making. Reasoning appears to help people little when deciding; it directs people to the decisions that will be easily justified, not to the best decisions!
However, it isn’t all bad news. The important asymmetry is between production of arguments, and their evaluation. In groups with an interest in finding correct answers, “truth wins”.
Becoming individually stronger at sound reasoning is possible, Mercier and Sperber point out, but rare. The best achievements of reasoning, in science or morality, are collective.
JC comments: I am trying to figure out what all this might mean in context of the IPCC consensus building process (hence the “brain sprain.”) The money quote in all this to me is this one:
If we generalize to problems that do not have a provable solution, we should expect, if not necessarily truth, at least good arguments to win. […] People are quite capable of reasoning in an unbiased manner at least when they are evaluating arguments rather than producing them and when they are after the truth rather than after winning a debate.
So it is easier to be unbiased when evaluating someone else’s argument than when making your own argument. I’m not sure I buy this (for a recent example, read Greenfyre’s analysis of my Polyclimate post.)
The other important conclusion is that:
A group in a debate setting can do better than its best member.
This conclusion supports the concept of the consensus building process. I think Mooney gets it right with this statement:
And groups where everyone agrees are known to become more extreme in their views after “deliberating”–this is the problem with much of the blogosphere.