Part 3 explores biases that lead to overconfidence. With all the heuristics and biases described above working against us, when we construct satisfying stories about the world, we vastly overestimate how much we understand about the past, present, and future.
The general principle behind these biases is this: we crave a coherent story of the world, which comforts us in a world that may be largely random. If the story is good, you believe it.
Insidiously, the fewer data points you receive, the more coherent the story you can form. You often don’t notice how little information you actually have and don’t wonder about what is missing. You focus on the data you have, and you don’t imagine all the events that failed to happen (the nonevents). You ignore your ignorance.
And even if you’re aware of the biases, you are nowhere near immune to them. Even when told that these biases exist, you often exempt yourself, believing you’re smart enough to avoid them.
The ultimate test of an explanation is whether it can predict future events accurately. This is the guideline by which you should assess the merits of your beliefs.
We want to package a messy world into a clean-cut story. It is unsatisfying to believe that outcomes are based largely on chance, partly because this makes the future unpredictable. But in a world of randomness, regular patterns are often illusions.
Here are a few examples of narrative fallacy:
Amusingly, in some situations the same explanation can be applied to both possible outcomes. Some examples:
Even knowing the narrative fallacy, you might still be tempted to write a narrative that makes sense—for example, successful companies become complacent, while underdogs try harder, so that’s why reversion to the mean happens. Kahneman says this is the wrong way to think about it—the gap between high performers and low performers must shrink, because part of the outcome was due to luck. It’s pure statistics.
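Kahneman's claim that reversion to the mean is "pure statistics" can be sketched with a small simulation (the numbers here are hypothetical, chosen only to illustrate the mechanism: each round's outcome is fixed skill plus freshly drawn luck):

```python
import random

random.seed(0)
N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]

def outcomes(skill):
    # Each round's outcome is fixed skill plus fresh random luck.
    return [s + random.gauss(0, 1) for s in skill]

round1 = outcomes(skill)
round2 = outcomes(skill)

# Take the top 10% of performers from round 1...
top = sorted(range(N), key=lambda i: round1[i], reverse=True)[: N // 10]

avg1 = sum(round1[i] for i in top) / len(top)
avg2 = sum(round2[i] for i in top) / len(top)

# ...and their round-2 average falls back toward the mean. No story
# about complacency is needed: the luck component simply doesn't repeat.
print(f"Top decile, round 1 average: {avg1:.2f}")
print(f"Same people, round 2 average: {avg2:.2f}")
```

The top decile still does better than average in round 2 (they really are more skilled on average), but by much less than in round 1, because part of their round-1 performance was luck that didn't recur.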
There are obviously factors that correlate somewhat with outcomes. The founders of Google and Facebook are likely more skilled than founders in the lower quartile of startups. Warren Buffett’s experience and knowledge likely contribute meaningfully to his investing success, and you’d be more successful if you replicated them. The key question is: how strong is the correlation?
A professional golfer can beat a novice essentially 100% of the time. Here skill is the dominant factor: the correlation is very high, so predictability is very high. In contrast, if you took the management principles espoused in business literature and tried to predict company outcomes, you might find they predict little. The correlation between management principles and company outcomes is likely low, which means a company’s success or failure is likely not due to its management practices.
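The link between correlation and predictability can be made concrete with a toy model (the skill weights below are hypothetical, not measured values): outcomes are a weighted mix of skill and luck, and we check how often the more-skilled of two competitors actually posts the better outcome.

```python
import random

random.seed(1)

def win_rate(skill_weight, trials=20_000):
    """How often does the more-skilled of two competitors win, when
    outcome = skill_weight * skill + (1 - skill_weight) * luck?"""
    wins = 0
    for _ in range(trials):
        skill_a, skill_b = random.gauss(0, 1), random.gauss(0, 1)
        out_a = skill_weight * skill_a + (1 - skill_weight) * random.gauss(0, 1)
        out_b = skill_weight * skill_b + (1 - skill_weight) * random.gauss(0, 1)
        # Did the competitor with higher skill get the higher outcome?
        if (out_a > out_b) == (skill_a > skill_b):
            wins += 1
    return wins / trials

# Golf-like domain: skill dominates, so knowing skill predicts outcomes well.
print(f"skill weight 0.95: {win_rate(0.95):.0%}")
# Business-like domain: luck dominates, so knowing skill predicts little.
print(f"skill weight 0.20: {win_rate(0.20):.0%}")
```

When skill carries most of the weight, the more-skilled competitor wins nearly every time; when luck dominates, the win rate sits only slightly above a coin flip, which is why explanations built from those outcomes predict so little.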
Be wary of highly consistent patterns drawn from comparing more successful and less successful examples. You don’t know whether the samples were cherry-picked, whether failed cases were excluded from the dataset, or what other selection effects were at play.
Be wary of people who declare very high confidence around their explanation. This suggests they’ve constructed a coherent story, not necessarily that the story is true.
Once we know the outcome, we connect the dots in the past that make the outcome seem inevitable and predictable.
Insidiously, you don’t remember how uncertain you were in the past—once the outcome is revealed, you believe your past self was much more certain than you actually were! It might even be difficult to believe you ever felt differently. In other words, “I knew it all along.” You rewrite the history of your mind.
Hindsight bias is a problem because it inflates our confidence about predicting the future. If we are certain that our past selves were amazing predictors of the future, we believe our present selves to be no worse.
Related to hindsight bias, outcome bias is the tendency to judge the quality of a decision by its outcome, once the outcome is known. People who succeeded are assumed to have made better decisions than people who failed.
This causes a problem where people are rewarded and punished based on outcome, not on their prior beliefs and their appropriate actions. People who made the right decision but failed are punished more than those who took irresponsible risks that happened to work out.
(Shortform note: to push the logic further, this causes problems for continuing success in the future. People who got lucky will be promoted but won’t be able to replicate their success. In contrast, the people who made good decisions won’t be promoted into a position to succeed in the future.)
A few examples of outcome bias:
The natural consequence of a reward system subject to outcome bias is bureaucracy: if your decisions will be scrutinized but the outcome is unpredictable, it’s better to follow rigid procedures and avoid risks. If you have proof that you followed directions, then even if your project ends up a failure, you won’t take the blame.
(Shortform note: antidotes to hindsight and outcome bias include:
Even when presented with evidence of your poor predictions, you do not tend to adjust your confidence in future predictions. You forge ahead, confident as always, discarding the evidence.
Kahneman argues the entire industry of the stock market is built on an illusion of skill. People know that on average, investors do not beat market returns (by definition: since the market return is the average of all traders’ returns, this must be the case). And plenty of studies show that retail investors trade poorly, against best practices—they sell rising stocks to lock in the gains and hang on to their losers out of hope, the exact opposite of what they should do. In turn, large professional investors are happy to take advantage of these mistakes. But retail traders march on, believing they have more skill than they really do.
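The "by definition" claim can be illustrated with a toy simulation (all numbers hypothetical): however traders shuffle returns among themselves, their deviations from the market sum to zero, so the average active trader earns the market return minus costs.

```python
import random

random.seed(2)

market_return = 0.07   # assumed market-wide return for the year
fee = 0.01             # assumed annual trading costs for active traders
n = 1000               # number of active traders in the toy market

# Trading redistributes returns among traders: some beat the market,
# some trail it, but the deviations must net out to zero.
deviation = [random.gauss(0, 0.15) for _ in range(n)]
mean_dev = sum(deviation) / n
deviation = [d - mean_dev for d in deviation]   # enforce zero-sum

gross = [market_return + d for d in deviation]
net = [g - fee for g in gross]

print(f"Average gross return: {sum(gross) / n:.2%}")  # equals the market
print(f"Average net return:   {sum(net) / n:.2%}")    # market minus fees
```

Individual traders can still beat the market, but only at other traders' expense; on average, active trading cannot outperform, and costs drag the average below the market.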
Here are several reasons it’s so difficult to believe that randomness is the primary factor in your outcomes and that your skill is worse than you think: