When you evaluate a decision, you’re prone to focus on the individual decision, rather than the big picture of all decisions of that type. A decision that might make sense in isolation can become very costly when repeated many times.
Consider both decision pairs, then decide what you would choose in each:
Pair 1
Option 1: A certain gain of $240.
Option 2: A 25% chance of gaining $1000 and a 75% chance of gaining nothing.
Pair 2
Option 3: A certain loss of $750.
Option 4: A 75% chance of losing $1000 and a 25% chance of losing nothing.
As we know already, you likely gravitated to Option 1 and Option 4.
But let’s actually combine the options into the two possible pairings and weigh them against each other.
1+4: 75% chance of losing $760 and 25% chance of gaining $240
2+3: 75% chance of losing $750 and 25% chance of gaining $250
Even without calculating these out, 2+3 is clearly superior to 1+4: at the same probabilities, you lose less money in the bad case and gain more in the good case. Yet you likely didn’t think to combine the options across the two decisions and compare the pairings against each other.
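To make the comparison concrete, here is a minimal sketch (our addition, not from the book) that computes the expected value of each combined gamble:

```python
# Each gamble is a list of (probability, dollar outcome) pairs.
combined_1_and_4 = [(0.75, -760), (0.25, 240)]  # sure +$240 plus the 75% / -$1000 gamble
combined_2_and_3 = [(0.75, -750), (0.25, 250)]  # sure -$750 plus the 25% / +$1000 gamble

def expected_value(gamble):
    """Probability-weighted average outcome of a gamble."""
    return sum(p * outcome for p, outcome in gamble)

print(expected_value(combined_1_and_4))  # -510.0
print(expected_value(combined_2_and_3))  # -500.0: better in expectation, and $10 better in every outcome
```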
This is the difference between narrow framing and broad framing. The ideal broad framing is to consider every combination of options to find the optimum. This is obviously more cognitively taxing, so instead you use the narrow heuristic—what is best for each decision at each point?
An analogy: narrow framing is like judging each single bet on its own, while broad framing is like assembling a portfolio of bets.
Yet each single decision in isolation can be hampered by probability misestimations and inappropriate risk aversion/seeking. When you repeat this single suboptimal decision over and over, you can rack up large costs over time.
Practical examples:
To overcome narrow framing, adopt risk policies: simple rules to follow in individual situations that give a better broad outcome. Examples:
Related to narrow framing, we have a tendency to bin outcomes into separate mental accounts. This provides organization and simplifies calculations, but it can become costly. For instance, people don’t like to close accounts with a negative balance.
Examples of when binning is harmful:
Binning isn’t always harmful. Here are examples of when keeping separate accounts might be helpful:
Because of a narrow frame, System 1 doesn’t know what information it’s missing. It doesn’t consider the global set of possibilities and make decisions accordingly. This can lead to odd situations where you make one set of decisions when considering two cases individually, then a contradictory decision when you consider them jointly.
Consider a nonprofit that reduces plastic in the ocean to avoid poisoning and killing baby dolphins. How much are you willing to donate to support the cause?
Now consider the discovery that farmers are at significantly higher risk of skin cancer because of the time they spend outdoors. How much are you willing to donate to a nonprofit that reduces cancer for at-risk groups?
You are likely willing to pay more for the dolphins than for the farmers. In isolation, each scenario engages a different “account” and intensity of feeling. Saving baby dolphins draws on the “save the animals” account, and baby dolphins rank high on that scale (relative to, say, saving wasps). You intensity-match the strong emotion you feel to an amount you’re willing to pay.
In contrast, the farmer skin cancer example evokes a “human public health” account, on which farmer skin cancer likely ranks lower than baby dolphins rank on the animal account. The emotion is weaker, so the donation is lower.
Now consider the two examples together. You have the option of donating to a dolphin-saving non-profit, or a human cancer-reducing non-profit. Which one are you more willing to pay for? Likely you now realize that humans are more important than animals, and you recalibrate your donations so the second amount exceeds the first.
Judgment and preferences are coherent within categories, but may be incoherent when comparing across categories.
And because of the “what you see is all there is” bias, alternative categories may not be available for you to consider. Much of life is a between-subjects trial - we only get exposed to one major event at a time, and we don’t think to compare across instances.
Problems can arise when you don’t have a clear calibration of a case to the global set of cases. Examples:
(Shortform note: the following are our additions and not explicitly described in the book.)
The context in which a decision is framed makes a big difference to the emotions it evokes and to the ultimate decision. In particular, because losses are so much more painful than gains, two logically equivalent framings, one stated as a loss and the other as a gain, can produce contradictory decisions.
Consider how enthusiastic you are about each opportunity:
A 10% chance to win $100, and a 90% chance to lose $5.
Buying a $5 raffle ticket that gives you a 10% chance to win $105 and a 90% chance of winning nothing.
These are logically identical situations: in both, you end up $100 ahead 10% of the time and $5 behind 90% of the time. Yet the latter opportunity is much more attractive! Loss aversion is at play here again: losses are more painful than uncaptured gains, and the ticket’s price registers as a cost rather than a loss.
Even though framing makes a large difference, most of us accept problems as they are framed, without considering alternative framings.
Consider two framings of two vaccine programs that can save 600 people affected by a virus:
Framing 1: Program A will save 200 people. Program B has a ⅓ chance of saving all 600 and a ⅔ chance of saving none.
Framing 2: Program A will leave 400 people dead. Program B has a ⅓ chance that nobody will die and a ⅔ chance that all 600 will die.
Per prospect theory, you can predict that people prefer A under the first framing (risk aversion over gains) and B under the second (risk seeking over losses). But the framings are logically equivalent: saving 200 of the 600 is the same as letting 400 die, and a ⅓ chance of saving everyone is the same as a ⅓ chance that nobody dies.
Consider which is better for the environment:
Adam switches from a car with 12mpg to a car with 14mpg.
Barry switches from a car with 30mpg to a car with 40mpg.
Barry’s move looks like the bigger jump (10mpg and a 33% improvement), but this is deceptive. If they both drive the same distance in a year (say, 10,000 miles), Adam will use about 119 fewer gallons of gas, whereas Barry will cut his use by only about 83 gallons. The metric that actually matters is gallons per mile, which is the inverse of mpg, and differences between those fractions are much less intuitive.
For an extreme, consider going from 1mpg to 2mpg vs 100mpg to 200mpg. Over 1000 miles, the former will save 500 gallons; the latter, only 5!
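To see the gallons-per-mile logic in numbers, here is a minimal sketch (our addition; the 10,000-mile annual distance is an assumption chosen to match the figures above):

```python
def gallons_saved(old_mpg, new_mpg, miles):
    """Fuel saved by switching cars, driving the same number of miles."""
    return miles / old_mpg - miles / new_mpg

annual_miles = 10_000  # assumed annual driving distance

print(round(gallons_saved(12, 14, annual_miles)))   # 119 gallons saved (Adam)
print(round(gallons_saved(30, 40, annual_miles)))   # 83 gallons saved (Barry)

# The extreme case over 1,000 miles:
print(gallons_saved(1, 2, 1_000))       # 500.0 gallons
print(gallons_saved(100, 200, 1_000))   # 5.0 gallons
```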
Here are more examples of how framing leads to distorted interpretations:
Even experts like doctors and public health officials suffer from the same framing biases. Even more troubling, when presented with the inconsistency, people cannot explain the moral basis for their decisions. System 1 applies a simple rule (saving lives is good, deaths are bad), but System 2 has no moral rule that readily resolves the question.
We’ve shown that humans are not rational in the decisions they make. Unfortunately, when society believes in human rationality, it also promotes a libertarian ideology in which it is immoral to protect people against their choices. “Rational people make the best decisions for themselves. Who are we to think we’re better?” This leads to beliefs like:
This belief in rationality also leads to a harsher conclusion: people apparently deserve little sympathy for putting themselves in worse situations. Elderly people who don’t save get little more sympathy than people who complain about a bill after ordering at a restaurant. Rational agents don’t make mistakes.
Behavioral economists believe people do make mistakes and need help to make more accurate judgments. They believe freedom is a virtue worth having, but it has a cost borne by individuals who make bad choices (that are not completely their fault) and by a society that feels obligated to help them.
The middle ground might be libertarian paternalism, in which the state nudges people toward better decisions while giving them the freedom to opt out. Examples include nudges for retirement savings, health insurance, organ donation, and easy-to-understand legal contracts.