One of the great mysteries of life is why smart people do stupid things. Intelligence is a strangely ineffective barrier to silliness, and one of the things that makes our industry so entertaining is that it continuously supplies examples.
An interesting contribution to this topic comes from Stephen Fleming, a Professor of Cognitive Neuroscience at University College London, who recently published ‘Know Thyself: The New Science of Self-Awareness’. Self-awareness, in this case, means metacognition (cognition about cognition), or the study of how well we really know what we think we know. In other words, it’s the study of how people estimate their confidence in what they know, or in how well they will perform – or have performed – certain tasks. In Fleming’s words, “metacognition is most useful when we’re doing stupid things”.
We use our metacognitive circuitry when we make shopping lists, for example, anticipating that our memory might fail; and when those of us whose eyesight has deteriorated correctly conclude that it isn’t the world that is getting more blurry. This ability to think about the limits of our own abilities is a remarkable feature, unavailable to most members of the animal kingdom, and in fact unavailable to humans until the age of about four, as anyone with a toddler will attest. However, we are not perfect at it. As Daniel Kahneman and Amos Tversky documented, under uncertainty most people exhibit a significant decoupling between confidence and capability. Partly, this is because our brains are trained to make inferences from imperfect information. Much of the time, what we ‘know’ is essentially made up, or (more charitably) an educated guess. Our level of confidence depends on the picture we’ve constructed in our minds, but our actual performance will only be revealed when our guesses collide with reality.
The prevailing theory is that we have inherited this inference habit from our tribal, hunter-gatherer past, when we had to create narratives and models to explain other people’s behaviours from limited clues. We also had to explain our own behaviour to others (sometimes opting for deception), and to transmit the information we possessed effectively (essential in a collaborative activity such as a hunt). Our brains therefore evolved to ruminate continuously, both on the behaviour of others and on our own.
The mental machinery that produces these explanations about the world is so powerful that it doesn’t stop when we stop talking. In addition to trying to get into others’ heads, we continuously model our own behaviour and create narratives to explain it. These narratives aren’t always optimised for truth; rather, they are subject to biases and slaves to emotions. Sometimes they cohere more or less with reality, but sometimes they don’t. In the extreme, when they decouple from reality completely, they lead to psychosis and confabulation.
Despite Kahneman’s findings, we still intuitively want to believe that intelligence has something to do with confidence and self-awareness. But reason alone doesn’t seem to improve metacognition. People with strong reasoning powers (those who do well in IQ tests) tend to use them to become better ‘advocates’ for their theories and emotions, jumping through increasingly difficult hoops to justify their own narratives. Until recently we could merely observe this phenomenon. But Stephen Fleming gets us closer to satisfactorily explaining why it happens, by finding, first, that the part of the brain that deals with metacognition is quite distinct from the part we engage for ‘brute-force’ reasoning. Put another way, the cognitive resources we engage to take an IQ test are not the same ones we engage to assess how well we’ve done in that test.
This discovery helps us recognise situations in which our self-awareness (or someone else’s) will fail. We’ve all had the feeling of being bamboozled by a person of superior intelligence. When this happens, we should at the very least ask whether their confidence might be misplaced. A good way to do this is to ask the person displaying high confidence how much of their wealth they would wager on their assertions being true. The question is designed to awaken the metacognitive part of the brain, and in most people it forces the refinements of probabilistic thinking to come online.
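To see why the wager question works, it helps to remember that stated confidence can be scored once outcomes are known. Below is a minimal, hypothetical sketch in Python (ours, not anything from Fleming’s book) using the Brier score – the average squared gap between a stated probability and what actually happened – to show that an always-certain speaker with a 60% hit rate is penalised more heavily than one who states 60% confidence on the same record. All the numbers are invented for illustration.

```python
# Hypothetical illustration: scoring stated confidence against outcomes.
# The Brier score is the mean squared error between a stated probability
# and the eventual outcome; lower scores mean better calibration.

def brier_score(confidences, outcomes):
    """confidences: stated probabilities in [0, 1];
    outcomes: 1 if the assertion turned out true, 0 otherwise."""
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(outcomes)

track_record = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]  # right 6 times out of 10

# 'Certain' every time, but right only 60% of the time:
print(brier_score([1.0] * 10, track_record))  # 0.40
# The same record, stated with calibrated 60% confidence:
print(brier_score([0.6] * 10, track_record))  # 0.24
```

Asking someone to wager is, in effect, asking them to commit to a number that can later be scored this way.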
Second, and encouragingly, the fact that metacognitive circuits are distinct from intelligence circuits means that metacognition can be trained. Scientists (good ones) know how rare it is to know something with near-100% certainty, so over time they learn to hedge, qualify and caveat everything they say. This can lead to extreme cases of what’s known as epistemic humility (humility about knowledge), which in turn leads to potentially disastrous misunderstandings between scientists and the general public, because the public mistakes this humility for lack of knowledge. But the fact that scientists learn to speak this way shows they have calibrated their metacognition over time.
Similarly, there is the Dunning-Kruger effect: the inverse relationship between stated confidence and actual knowledge. We are all susceptible: we begin learning about something and are inexplicably confident about it; but as we learn more and become acquainted with a subject’s complexities, nuances and conditionalities, our confidence recedes. That makes us sound less knowledgeable, but the opposite is true. The Dunning-Kruger effect is captured in popular phrases like ‘a little knowledge is a dangerous thing’ (which Google tells us derives from Alexander Pope’s ‘a little learning is a dangerous thing’) and ‘I fear the man who has read just one book’ (Thomas Aquinas, apparently). Nevertheless, the fact that a person can recalibrate their assessment of their own knowledge by learning more shows that metacognition is trainable through feedback. A lesson here is that it might be a good idea to withhold strong opinions during the initial phase of learning.
Third, Fleming finds that stress is perhaps the single most detrimental ingredient for metacognition. It causes marked changes in the frontal cortex of the brain, and the tunnel vision it triggers suppresses our ability to take a reflective view of ourselves. This is a shame, because stressful situations are precisely when we need our full powers of judgment. In experiments, Fleming tests the value of obtaining a third-person perspective in such cases, finding it powerful enough to endorse the ask-a-friend technique: seeking the opinion of people we trust, especially if they are emotionally detached from the problem. While most stressful situations preclude us from doing this in the moment, we can apply the analysis retrospectively and learn from it. We should also learn to be more tolerant of other people’s behaviour under stress.
Fourth, we have metacognitive blind spots, sometimes at the societal level, where the mere broaching of a topic elevates stress and defensiveness. The topics capable of arousing such responses change with the times, but it would help the general conversation to recognise in advance what they are. On which point, the psychologist Helen Fischer has conducted studies into how people rate their confidence on climate information, finding that their metacognitive abilities are significantly worse on climate science than on other scientific topics. It seems that the emotive nature of climate change numbs our capacity to realise we might be wrong. You could easily imagine the same applying to vaccines (see Spotify’s recent problems with Joe Rogan) and conversations about race.
The practical applications of metacognitive science to the world of investments are numerous. Investors live in a permanent tug-of-war between humility and confidence, and their confidence can be directly observed through portfolio weightings. One simple way to tell whether a fund manager is ‘well-calibrated’ from a metacognitive perspective, therefore, is whether they have higher conviction (i.e., higher weights) when they are more likely to be right, and lower conviction when they are more likely to be wrong.
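As a hypothetical illustration of what such a check might look like (our sketch, not a method Fleming describes), the short Python snippet below asks whether conviction, expressed as position weight, lines up with subsequent results. The weights and excess returns are invented, and `statistics.correlation` requires Python 3.10 or later.

```python
# Hypothetical 'conviction calibration' check: does the manager put more
# weight behind the positions that subsequently work? All figures invented.

from statistics import correlation  # available from Python 3.10

weights = [0.08, 0.05, 0.03, 0.07, 0.02, 0.04, 0.06, 0.01]            # position sizes
excess_returns = [0.12, 0.04, -0.03, 0.09, -0.05, 0.01, 0.06, -0.02]  # vs benchmark

# A well-calibrated manager should show a positive association between
# the conviction expressed (the weight) and how the call turned out.
print(f"weight/outcome correlation: {correlation(weights, excess_returns):.2f}")
```

A persistently positive reading over many periods would suggest the manager’s confidence carries information; a flat or negative one would suggest the weights express something other than calibrated knowledge.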
There is another particularly relevant point in the book: narrating to others exposes what we don’t know about a topic by summoning our metacognitive powers. One of the best RFP questions we’ve received was along the lines of ‘how would you explain your strategy to a layman who is smart but not a finance professional?’ The question works because having to describe something exposes the gaps in a person’s knowledge. In our team, we insist on writing things down and debating them in a calm, considered setting, and we never act on a whim. We’d never quite thought about it this way, but we’ve been using narration to summon metacognition, forcing our confidence to match our actual knowledge. It’s a natural route to recalibration, and one which some people figured out long ago. Albert Einstein, for example, is often credited with saying “if you can’t explain it to a six-year-old, you don’t really understand it yourself”.