Misunderstanding cognitive bias

It’s great to see the enthusiasm with which cognitive biases are discussed on social media. Occasionally, though, enthusiasm gets in the way of accuracy, and an explanation takes hold even though it isn’t quite right.

The latest example comes via the (normally excellent) I Fucking Hate Pseudoscience site and Facebook community. The post “Understanding Bias – What color is this truck?” starts off well, pointing out that our judgement of the likelihood of getting attacked by a shark is biased by a number of factors: sensationalism in the media, the fact that the media are global rather than local, and the individual’s unconscious assumption that global information reflects local risks.

The sentence “The mental shortcut we use by making this assumption is an example of a heuristic” is ambiguous, because the previous paragraph mentioned a bunch of processes, not all of which count as heuristics. Still, I’m happy to give the benefit of the doubt at this point.

In the next paragraph, I start to question whether the article is actually about heuristics and biases. This is where the picture of a truck comes in:

Considering the limited information you have been given in the picture, the most unassuming and accurate answer you can give is that this side of the truck is yellow.  Your brain, however, automatically used the data to construct a bigger picture, and in that picture the entire truck is yellow.

This is inductive reasoning (not to be confused with mathematical induction): going beyond the evidence given, based on past knowledge. The photo of the truck is a 2D object, but I naturally see it as depicting a 3D scene, because I have experience of a 3D world. I understand that the road and kerb continue behind the truck and to either side of the scene, even though only part of the road and kerb is visible, because I’m familiar with a world where visible objects tend to be solid and opaque. As the blog post correctly says, I also assume the whole truck is yellow because the part I can see is yellow. That comes from familiarity with trucks, vehicles and solid objects generally.

Making the assumption is the heuristic, and the fact that you were blindly inclined towards an answer that could be wrong is the bias.

What’s happened here? The point that had just been made wasn’t about heuristics or biases. The Tversky & Kahneman work that started the heuristics-and-biases revolution wasn’t just the observation that our sensory information is incomplete and that we naturally make inferences beyond it. That had been known for centuries, at least since Enlightenment philosophy (T&K’s Science paper: 1974. David Hume’s A Treatise of Human Nature: 1739).

A heuristic is an information-processing short-cut that is simpler than working the answer out in full, but gives you a good enough answer, often enough. Dozens of different heuristics have been identified and named in scientific books and papers. For example, if you judge the frequency of shark attacks by how easy it is to remember a report of a shark attack, you’re using the availability heuristic.
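
To make the contrast concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the causes, the counts and the memorability weights are toy values, and the recall model is just one crude way an availability-style short-cut might be cashed out. Working the answer out in full means counting every record; the short-cut samples only the reports that come to mind most easily, with sensational causes over-represented.

```python
import random

# Toy "world": one year of reported deaths by cause. All names and
# numbers here are invented for illustration.
true_counts = {"shark attack": 5, "falling coconut": 150, "drowning": 3000}

# Working the answer out in full: count every record.
def exact_frequency(cause, counts):
    return counts[cause] / sum(counts.values())

# Availability-style short-cut: judge frequency by how easily examples
# come to mind, modelled crudely as recalling a small sample of reports
# in which sensational causes are over-represented.
memorability = {"shark attack": 50.0, "falling coconut": 1.0, "drowning": 0.5}

def availability_estimate(cause, counts, n_recalled=200):
    causes = list(counts)
    # Chance of recalling a report ~ how often it happened x how memorable it is.
    weights = [counts[c] * memorability[c] for c in causes]
    recalled = random.choices(causes, weights=weights, k=n_recalled)
    return recalled.count(cause) / n_recalled

random.seed(1)
print(exact_frequency("shark attack", true_counts))        # tiny: about 0.0016
print(availability_estimate("shark attack", true_counts))  # inflated: about 0.13
```

The short-cut is cheap and often close enough, but it predictably inflates the memorable causes at the expense of the mundane ones.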

In interpreting the photo, you’re making a judgement about how likely it is that a truck painted yellow on its left side is also painted yellow on the other side. This will be shaped by past experience: if every truck you’d ever encountered had been painted different colours on the left and right, the photo would not lead you to say the whole truck is yellow.

We’re not given a named heuristic that might affect this process, or any suggestion of a short-cut. If there were a process that didn’t involve generalising from past experience of trucks, but gave a similar answer, that would be a heuristic, but there’s no evidence here of such a process.

What about “bias”? The possibility of error isn’t bias. If it were, all inductive reasoning would be biased, since it inevitably involves the possibility of error. So being “inclined towards an answer that could be wrong” isn’t itself a bias.

Bias is systematic error, and it’s directional. For example, people often overestimate the frequency of murders but underestimate the frequency of suicides. That seems to result from a combination of media bias (suicides don’t get covered unless it’s a celebrity; murders get more coverage, sometimes in grisly detail) and psychological bias (the shocking detail makes murders more available, so the availability heuristic will lead to a predictable error).
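
To make “systematic and directional” concrete, here is a toy calculation in the same spirit (the numbers are invented, not real statistics): bias shows up as a signed error with a consistent direction across judges, not as mere scatter around the true value.

```python
# Toy numbers, purely illustrative, not real statistics: true annual
# counts for two causes of death versus the average of people's estimates.
data = {
    "murder":  {"true": 20_000, "mean_estimate": 35_000},
    "suicide": {"true": 45_000, "mean_estimate": 25_000},
}

# Bias is the signed, systematic part of the error. Random noise in
# individual guesses averages out; a bias points the same way for everyone.
for cause, d in data.items():
    signed_error = d["mean_estimate"] - d["true"]
    direction = "over" if signed_error > 0 else "under"
    print(f"{cause}: {direction}estimated by {abs(signed_error):,}")
```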

What’s the bias in judging the colour of the truck? Do people overestimate, or underestimate, the likelihood that a truck is the same colour on both sides? Not knowing the percentage of trucks that have left/right splits in their colour scheme, we can’t say.
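
Put as a sketch, with the loud caveat that every quantity in it is a placeholder nobody has measured: the sign of any “truck colour” bias flips depending on that unknown base rate.

```python
# Both inputs are placeholders: split_rate is the unknown fraction of
# trucks painted differently on each side, and mean_judged_prob is the
# (uncollected) average of people's judged probabilities.
def truck_judgement_bias(mean_judged_prob, split_rate):
    # Simplifying assumption: a non-split truck is yellow on both sides,
    # and a split truck never happens to match.
    true_prob = 1.0 - split_rate
    return mean_judged_prob - true_prob  # > 0 overestimate, < 0 underestimate

# The sign of the "bias" depends entirely on a number nobody here has:
for split_rate in (0.001, 0.10, 0.30):
    print(split_rate, truck_judgement_bias(0.9, split_rate))
```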

Consider another kind of bias. Are people systematically overconfident in this case? Overconfidence takes up a big chunk of the Tversky & Kahneman books: it’s where people’s estimates of their chance of being right are wildly out of line with how often they actually are right. For example, on many questions where people are 98 or 99% confident of their answer, they are wrong about 40% of the time. The blog post doesn’t address how confident people are in their judgement about the truck. How surprised would they be to find that the truck was a different colour on the other side? We don’t know. There could be an overconfidence bias, an underconfidence bias, or neither. Without a scientific process involving the collection of data, there’s no reason to say that anyone’s response to the truck photo is biased.
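
Overconfidence is measurable in principle, though. Calibration studies group answers by stated confidence and compare each group’s stated confidence with its actual hit rate. A sketch with made-up data, assuming we had actually collected (confidence, correct?) pairs for the truck question:

```python
from collections import defaultdict

# Made-up data: pairs of (stated confidence, whether the answer was right).
answers = [
    (0.99, True), (0.99, False), (0.99, True), (0.99, False), (0.99, True),
    (0.70, True), (0.70, False), (0.70, True), (0.70, True), (0.70, False),
]

# Group answers by stated confidence and compare each group's stated
# confidence with its actual hit rate.
by_confidence = defaultdict(list)
for confidence, correct in answers:
    by_confidence[confidence].append(correct)

for confidence, results in sorted(by_confidence.items()):
    hit_rate = sum(results) / len(results)
    print(f"said {confidence:.0%} sure -> right {hit_rate:.0%} of the time")

# Matching numbers would mean good calibration; here the "99% sure"
# answers are right only 60% of the time, i.e. overconfidence.
```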

My intent is not to warn you about some new “color” bias. The point is to provide a framework for understanding how heuristics work and why they lead to biases.

But the photo of the truck showed how inductive reasoning carries the possibility of error: the point Hume made in 1739. “How heuristics work and why they lead to biases” was touched on in the discussion of shark attacks, but the blog post went straight on to use “heuristics” and “cognitive bias” in a different and confusing way.

Now that I’ve got my main objections out of the way, let’s revisit this sentence about the image of a truck:

the most unassuming and accurate answer you can give is that this side of the truck is yellow.

That’s not unassuming: it assumes that the digital image on your screen is a photograph of a real truck. It’s easy as pie to change the colour of an object in a photograph. How about “the most unassuming and accurate answer you can give is that there are some yellow pixels on your screen”? But wait! Now you’re assuming that air does not change the colour of light that passes through it. So it would be better to say “the most unassuming and accurate answer you can give is that yellow light is reaching your retinas”. But wait! How do you know that retinas exist and are essential to vision? Aren’t you assuming that the books and teachers you learned that from are correct? We could go on. Any attempt to draw a line in the process of inductive reasoning and say “this is the most unassuming and accurate statement” is going to be arbitrary. In his novel Stranger in a Strange Land, Robert Heinlein gives the example of seeing a building from a long way away, and falls into exactly this error.

This is great, though:

Being aware of confirmation bias does not help you to avoid it, it’s part of your cognition, and if you want to make the best attempt at weeding it out of your conclusions, you’ll have to reach outside yourself, beyond your own cognition, and use a process designed to compensate, like the scientific method.

Amen to that!

The literature:

The “Understanding Bias” blog post has zero citations. I hope the enthusiasm for the topic will result in people picking up books which give the mainstream understanding of cognitive bias.

I’m not suggesting everyone has to read the big collections of papers edited by Tversky, Kahneman and colleagues. There are textbooks that digest this material and integrate it with more recent research, though they can still be heavy going, such as Pohl’s Cognitive Illusions and Hardman’s Judgment and Decision Making. There are popular paperbacks which make it more accessible, such as Stuart Sutherland’s Irrationality, Cordelia Fine’s A Mind of its Own and Scott Plous’s The Psychology of Judgment and Decision Making. Daniel Kahneman has written a popular summary of his life’s work in Thinking, Fast and Slow, which is a good starting point for the issues here. There are many more recent books – of varying quality – but the ones mentioned here I can recommend.

When it comes to blogs, I don’t regularly follow You Are Not So Smart, but as far as I’ve seen, its explanations of biases are excellent. If some of the text I’ve written is similar to the Wikipedia article on Heuristics in judgment and decision-making, that’s because I’m the main author of that article. It has 57 footnotes with pointers to the literature.


Right now the Facebook post has 810 shares and 1,300 likes. That’s thousands and thousands of people who think they have learned something from this example about heuristics and biases. They have learned something, but it’s not what those terms were invented to describe.

Kudos to the blog author for responding to my criticisms on Facebook by inviting me to set down my objections.
