Confirmation bias is the tendency to seek out, interpret and remember information in ways that confirm our existing beliefs rather than genuinely test them. In general, it’s an irrational preference for information that matches our expectations. This is one of the first biases I learned about, but recently I’ve been reading up on it in a more systematic way. I’m putting my notes directly into Wikipedia rather than improving my own site.
There’s a massive irony in what I’ve learned, one that I’m surprised isn’t commented on more. The term “confirmation bias” grew out of a pair of experiments by Peter C. Wason in the 1960s. Since then, and because of them, it has become widely accepted that subjects seek to confirm their working hypotheses rather than subject them to falsification.
However, those experiments didn’t prove the existence of a confirmation bias. There were logical errors in the interpretation of the results, pointed out especially in a 1987 Psychological Review paper by Joshua Klayman and Young-Won Ha, which is one of my all-time favourite academic papers (see the Wikipedia article for refs). Subsequent research has found genuine confirmation biases, but they’ve turned out to be specific to particular situations rather than ubiquitous. When testing a hypothesis, people often seem to prefer a genuinely diagnostic strategy.
Despite this critique, there is still a lot of psychological writing that takes the Wason experiments as proving the reality of confirmation bias. Even Sutherland does so in “Irrationality”, his outstanding paperback round-up of bias research.
So why were these experiments accepted so easily as proof when, for a long time, the evidence was inadequate? Because it fit with expectations built up from informal observation: in other words, a clear case of confirmation bias.