Part 3-2: Formulas Beat Intuitions

Humans frequently have to make decisions from complicated data. Doctors make diagnoses, social workers assess prospective foster parents, bank lenders evaluate business risk, and employers decide whom to hire.

Unfortunately, humans are surprisingly bad at making these predictions. In study after study, algorithms have matched or beaten human experts at predicting outcomes accurately. And even when algorithms merely match human performance, they still win because they are so much cheaper.

Why are humans so bad? Simply put, humans overcomplicate things.

  • They inappropriately weigh factors that are not predictive of performance (like whether they like the person in an interview).
  • They try too hard to be clever, considering complex combinations of features when simple weighted sums are sufficient.
  • Their judgment varies from moment to moment without them realizing it. System 1 is highly susceptible to influences the conscious mind never notices: the person's environment, current mood, state of hunger, and recent exposure to information can all sway a decision. Algorithms don't feel hunger.
    • As an example, radiologists who read the same X-ray twice give different answers 20% of the time.
  • Even when given the formula's output, humans still do worse! They falsely believe they can "override" the formula because they see something it doesn't account for.

Simple algorithms are surprisingly good predictors. Even formulas that put equal weights on their factors can be as accurate as multiple-regression formulas, since they avoid accidents of sampling. Here are a few examples of simple algorithms that predict surprisingly accurately (a small sketch follows the list):

  • How do you predict marital stability? Take the frequency of sex and subtract the frequency of arguments.
  • How do you predict whether newborns are unhealthy and need intervention? For a long time, doctors relied on their (poor) judgment. Then, in 1952, Dr. Virginia Apgar invented the Apgar score, a simple algorithm that scores five factors, such as skin color and pulse rate. It is still in use today.
  • How do you predict the price of a bottle of wine? Traditionally, wine enthusiasts tasted the bottle, then assigned a hypothetical price. Instead, an economist used only a few weather variables from the vintage year, such as summer temperature and rainfall. This was more accurate than the humans, but wine experts were aghast: "how can you price the wine without tasting it?!" The formula was in fact better precisely because it did not factor in human tasting.
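
To make the equal-weight idea concrete, here is a minimal sketch in Python. The factor names, couples, and numbers are hypothetical; the point is that summing standardized factors with equal weights (plus or minus one), with no regression fitting at all, already produces a usable ranking.

```python
from statistics import mean, stdev

def equal_weight_score(cases, factors):
    """Rank cases by an equal-weight sum of standardized factors.

    `factors` maps each factor name to +1 ("more is better") or -1
    ("less is better"). No coefficients are fit from data, which is
    the point: equal weights can rival multiple regression.
    """
    # Standardize each factor across cases so units don't dominate.
    stats = {f: (mean(c[f] for c in cases), stdev(c[f] for c in cases))
             for f in factors}
    ranked = []
    for c in cases:
        score = sum(sign * (c[f] - stats[f][0]) / stats[f][1]
                    for f, sign in factors.items())
        ranked.append((c["name"], round(score, 2)))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical marital-stability example: lovemaking counts positively,
# quarrels count negatively (the "frequency of sex minus frequency of
# arguments" idea, here with standardized inputs).
couples = [
    {"name": "A", "lovemaking_per_month": 10, "quarrels_per_month": 2},
    {"name": "B", "lovemaking_per_month": 4,  "quarrels_per_month": 6},
    {"name": "C", "lovemaking_per_month": 7,  "quarrels_per_month": 7},
]
print(equal_weight_score(couples,
                         factors={"lovemaking_per_month": +1,
                                  "quarrels_per_month": -1}))
```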

There is still some stigma around letting robotic algorithms pervade our lives, as if they strip away some of life's romance.

  • Professionals who use intuition to make predictions feel outrage when algorithms encroach on their profession. Many of their predictions do turn out correct, and they do have skill; their downfall is that they don't know the boundaries of that skill.
  • It seems more heart-wrenching to lose a child due to an algorithm’s mistake than because of a human error.

But this stigma against algorithms is dissipating as they recommend useful things for us to buy and assemble winning baseball teams.

Antidote to Intuitions

When hiring for a job, Kahneman recommends standardizing the interview (a scoring sketch follows the list):

  • Make a list of up to 6 traits important for success in the role. Ideally these are orthogonal traits that don’t overlap strongly with each other.
  • Create direct questions to assess each trait. Use as little discretion as possible.
    • If assessing conscientiousness, you might ask, “how many times were you late to work in the past month?”
  • Create a rubric from 1 to 5, with notes on what you’re looking for in each grade.
  • When interviewing, collect information on one trait at a time, completing it before moving on.
  • Finally, hire the person with the highest score, period. Do not override this rule to favor someone your intuition likes better; that would be letting the halo effect and liking bias take over.
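
A minimal sketch of how that scoring could be mechanized, assuming hypothetical trait names and the 1-to-5 rubric above. The only decision rule encoded is the one Kahneman insists on: rank candidates by total score and don't override the result.

```python
# Hypothetical traits; in practice, pick up to six that matter for the role
# and that don't overlap strongly with each other.
TRAITS = ["conscientiousness", "technical_skill", "communication",
          "reliability", "teamwork", "initiative"]

def validate(scores: dict) -> None:
    """Ensure every trait was rated exactly once on the 1-5 rubric."""
    missing = set(TRAITS) - set(scores)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    for trait, value in scores.items():
        if trait not in TRAITS or not 1 <= value <= 5:
            raise ValueError(f"invalid rating {value!r} for {trait!r}")

def pick_hire(candidates: dict) -> str:
    """Return the candidate with the highest total score -- no overrides."""
    for scores in candidates.values():
        validate(scores)
    return max(candidates, key=lambda name: sum(candidates[name].values()))

# Usage: candidate -> {trait: rating on the 1-5 rubric}
candidates = {
    "Avery": {"conscientiousness": 4, "technical_skill": 5, "communication": 3,
              "reliability": 4, "teamwork": 3, "initiative": 4},
    "Blake": {"conscientiousness": 5, "technical_skill": 3, "communication": 4,
              "reliability": 5, "teamwork": 4, "initiative": 3},
}
print(pick_hire(candidates))  # totals: Avery 23 vs Blake 24 -> prints "Blake"
```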

When Can You Trust Human Intuition?

Clearly plenty of people develop skilled intuitions. Chess players do spot meaningful moves; doctors do make correct diagnoses. Within academia, the Naturalistic Decision Making (NDM) movement puts its faith in human intuition.

When can you trust human intuition? Kahneman argues accurate human intuition is developed in situations with two requirements:

  • An environment that is sufficiently regular to be predictable, with fast feedback
  • Prolonged practice to learn these regularities

Here are a few examples:

  • Most people can learn to drive. The input → output relationship is clear and immediate.
  • Doctors can learn to diagnose with limited data because they receive fast feedback on the true diagnosis from more testing.

Training can even occur indirectly, through words or thought: you can simulate situations and rehearse them mentally. For example, a young military commander can feel tension when passing through a ravine for the first time, because he has learned that such terrain invites an ambush.

To the NDM camp, Kahneman concedes that in situations with clear signals, formulas do not identify new critical factors that humans miss. Humans are efficient learners and generally don’t miss obvious predictors. However, algorithms do win at detecting signals within noisy environments.

In the brain, how do accurate intuitive decisions arise? They arise first from pattern matching in memory: System 1 retrieves a solution that fits the situation. System 2 then analyzes it, modifying it to overcome shortcomings until it seems appropriate. If the solution fails, another is retrieved and the process restarts.
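
The retrieve-then-verify loop described above is essentially generate-and-test. A toy sketch, with made-up memory contents and a made-up acceptability check, just to make the control flow concrete:

```python
def intuitive_decision(situation, memory, acceptable, adjust):
    """Generate-and-test: System 1 proposes from memory, System 2 checks.

    `memory` maps situations to candidate solutions (most familiar first),
    `adjust` stands in for System 2 patching a near-miss, and `acceptable`
    is the deliberate System 2 check. If no stored pattern survives
    scrutiny, fall back to slower analysis (here: return None).
    """
    for candidate in memory.get(situation, []):     # System 1: pattern match
        candidate = adjust(candidate, situation)    # System 2: modify
        if acceptable(candidate, situation):        # System 2: verify
            return candidate
    return None

# Hypothetical example: a commander deciding what to do at a ravine.
memory = {"ravine ahead": ["proceed", "send scouts", "take ridge route"]}
adjust = lambda plan, s: "proceed cautiously" if plan == "proceed" else plan
acceptable = lambda plan, s: "scouts" in plan or "ridge" in plan
print(intuitive_decision("ravine ahead", memory, acceptable, adjust))  # "send scouts"
```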

When Not to Trust Intuition

Not all supposed experts have real predictive skill. The problem with pundits and stock pickers is that they don’t train in predictable environments. When noise dominates the outcomes and feedback cycles are long, any confidence in the intuition’s validity is largely illusory.

Even worse, there can be "wicked" environments, where you learn the wrong lessons from experience because you yourself influence the outcome. For example, an early twentieth-century physician (an example recounted by Lewis Thomas) felt he could predict typhoid by palpating patients' tongues. In reality, he carried the bacterium on his hands and was spreading typhoid from patient to patient with the very examination that seemed to confirm his intuition.

Another sign of untrustworthy intuition is high confidence without good explanation. True experts know the limits of their knowledge and are willing to admit it. In contrast, people who are firmly confident all the time, like pundits on TV, may be hired more for their boisterousness than their accuracy.

How confident you feel about your intuition is not a reliable guide to its validity. You need to learn to identify situations in which intuition will betray you.

For example, you might conflate short-term results with long-term results. A psychiatrist may feel skilled at building short-term rapport with patients and believe she's doing a great job; but short-term rapport might not correlate strongly with long-term outcomes, which depend on many more factors.