Humans frequently have to make predictions from complex information. Doctors make diagnoses, social workers judge whether foster parents will be suitable, loan officers assess business risk, and employers decide whom to hire.
Unfortunately, humans are surprisingly bad at making such predictions. In study after study, algorithms have matched or beaten human experts in predictive accuracy. And even when algorithms merely match human performance, they still win, because they are so much cheaper to run.
Why are humans so bad? Simply put, humans overcomplicate things.
Simple algorithms are surprisingly good predictors. Even formulas that weight all of their factors equally can be as accurate as multiple-regression formulas, because fitted regression weights partly reflect accidents of sampling and so fail to generalize. Here are a few examples of simple algorithms that predict surprisingly accurately:
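The equal-weighting claim can be illustrated with a small simulation (hypothetical data, not from the book). A "unit-weight" formula that simply adds up standardized predictors cannot overfit, so out of sample it often predicts about as well as a regression whose weights were fitted on a small training sample:

```python
# Sketch with simulated data: unit-weight formula vs. fitted multiple regression.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, true_w, noise=1.0):
    X = rng.standard_normal((n, len(true_w)))
    y = X @ true_w + noise * rng.standard_normal(n)
    return X, y

true_w = np.array([0.5, 0.4, 0.3, 0.2])   # all predictors point the same way
X_train, y_train = simulate(40, true_w)    # small training sample -> overfitting risk
X_test, y_test = simulate(5000, true_w)    # large test sample for a stable comparison

# Multiple regression: weights fitted by least squares on the training sample.
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred_reg = X_test @ beta

# Unit-weight formula: standardize each predictor, then just add them up.
z = (X_test - X_test.mean(axis=0)) / X_test.std(axis=0)
pred_unit = z.sum(axis=1)

r_reg = np.corrcoef(pred_reg, y_test)[0, 1]
r_unit = np.corrcoef(pred_unit, y_test)[0, 1]
print(f"regression r = {r_reg:.3f}, unit-weight r = {r_unit:.3f}")
```

With only 40 training cases, the fitted weights chase sampling noise, and the naive equal-weight sum stays competitive.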
There is still some stigma around algorithms: letting formulas pervade life seems to strip away some of its romance.
But this stigma is dissipating as algorithms recommend useful products to us and assemble winning baseball teams.
When hiring for a job, Kahneman recommends standardizing the interview:
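Kahneman's recipe can be sketched in code (the trait names and 1-5 scale here are illustrative, not prescribed by the book): score each candidate on a fixed set of traits independently, then rank by the simple total rather than by a global gut impression.

```python
# Hypothetical structured-interview scoring: fixed traits, independent 1-5 scores,
# ranking by the simple sum instead of an overall intuitive judgment.
TRAITS = ["technical skill", "reliability", "communication", "initiative"]  # assumed traits

def total_score(scores: dict) -> int:
    assert set(scores) == set(TRAITS), "score every trait, none skipped"
    assert all(1 <= s <= 5 for s in scores.values()), "use the 1-5 scale"
    return sum(scores.values())

candidates = {
    "A": {"technical skill": 4, "reliability": 5, "communication": 3, "initiative": 4},
    "B": {"technical skill": 5, "reliability": 2, "communication": 4, "initiative": 3},
}
ranked = sorted(candidates, key=lambda c: total_score(candidates[c]), reverse=True)
print(ranked)  # hire the top total, resisting the urge to override it intuitively
```

The point of committing to the sum in advance is to keep a vivid interview impression from overriding the more reliable aggregate.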
Clearly, plenty of people do develop skilled intuitions: chess players spot meaningful moves, and doctors make correct diagnoses. Within academia, the Naturalistic Decision Making (NDM) movement puts its faith in human intuition.
When can you trust human intuition? Kahneman argues accurate human intuition is developed in situations with two requirements:
Here are a few examples:
Training can even occur vicariously, through words or thought: you can simulate situations and rehearse them mentally. For example, a young military commander can feel tension the first time he moves through a ravine, because he has learned that this kind of terrain invites an ambush.
To the NDM camp, Kahneman concedes that in situations with clear signals, formulas do not identify new critical factors that humans miss. Humans are efficient learners and generally don’t miss obvious predictors. However, algorithms do win at detecting signals within noisy environments.
How do accurate intuitive decisions arise in the brain? First, System 1 retrieves a solution from memory that pattern-matches the situation. System 2 then analyzes it, modifying it to overcome shortcomings until it seems appropriate. If the solution still fails, another is retrieved and the process restarts.
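This retrieve-then-verify cycle resembles a generate-and-test loop. The sketch below is a loose computational analogy (my framing, not the book's): candidates come back in pattern-match order, and each is checked and amended a few times before the next one is tried.

```python
# Loose analogy of the retrieve-then-verify loop: System 1 proposes candidates
# by pattern match; System 2 checks and amends each until one passes.
def intuitive_solve(situation, memory, check, amend, max_fixes=3):
    """memory: candidate solutions, best pattern-match first."""
    for candidate in memory:
        solution = candidate
        for _ in range(max_fixes):          # System 2 tweaks the retrieved idea
            if check(situation, solution):
                return solution
            solution = amend(situation, solution)
    return None                             # no retrieved pattern survives scrutiny

# Toy usage: find a remembered number that, after small adjustments, hits the target.
result = intuitive_solve(
    7,
    [5, 9, 4],
    check=lambda target, s: s == target,
    amend=lambda target, s: s + 1,
)
print(result)  # 7: retrieved 5, amended twice before the check passed
```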
Not all supposed experts have real predictive skill. The problem with pundits and stock pickers is that they don’t train in predictable environments. When noise dominates the outcomes and feedback cycles are long, any confidence in the intuition’s validity is largely illusory.
Even worse, there can be "wicked" environments, where you learn the wrong lessons from experience because you yourself influence the outcome. For example, the physician Lewis Thomas felt he could predict typhoid by touching a patient's tongue. In reality, he carried typhoid on his hands; he was infecting the patients himself by touching their tongues.
Another sign of untrustworthy intuition is high confidence without good explanation. True experts know the limits of their knowledge and are willing to admit it. In contrast, people who are firmly confident all the time, like pundits on TV, may be hired more for their boisterousness than their accuracy.
How confident you feel about your intuition is not a reliable guide to its validity. You need to learn to identify situations in which intuition will betray you.
For example, you might conflate short-term results with long-term results. A psychiatrist may feel skilled at building short-term rapport with patients and believe she's doing a great job; but short-term rapport may not correlate strongly with long-term outcomes, which depend on many more factors.