Part 4-3: Variations on a Theme of Prospect Theory

Indifference Curves and the Endowment Effect

Basic theory suggests that people have indifference curves when relating two dimensions, like salary and number of vacation days. Say that you value one day’s salary at about the same as one vacation day.

Theoretically, you should be willing to trade for any other point on the indifference curve at any time. So when, at the end of the year, your boss says you're getting a raise and offers the choice of 5 extra vacation days or a salary raise equivalent to 5 days of pay, you see the two options as roughly equivalent.

But say you're presented with another scenario. Your boss offers a new compensation package: you can get 5 extra days of vacation per year, but must take a salary cut equivalent to 5 days of pay. How would you feel about this?

Likely, loss aversion kicked in. Even though you were theoretically on your indifference curve, exchanging 5 days of pay for 5 vacation days, you didn't see this as an even exchange.

As with prospect theory, the idea of indifference curves ignores the reference point at which you start. In general, people have inertia to change.

This is called the endowment effect. Before you have something, you might have a certain indifference curve. But once you get it, a new reference point is set, and from that point loss aversion sets in: the pleasure of a gain is smaller than the pain of a corresponding loss.

Here are a few examples of people overvaluing things once they own them:

  • Sometimes you're only willing to sell something for more than you could buy it for on the market (like a collector's item).
  • Famous experiment: half of the subjects were given mugs and asked to quote prices at which they would sell their mug; the other half, who were not given mugs, were asked to bid for one. The buyers' average bid was $2.87; the mug owners quoted $7.12, well over double the buyers' bid! Note that the owners had received their mugs only a few minutes earlier.
    • Notably, a third group (the Choosers) could receive either a mug or a sum of money, and they indicated that receiving $3.12 was as desirable as receiving the mug. Importantly, the mug owners faced an essentially identical choice: they would leave with either the mug or the money someone paid for it. But because they already possessed the mug, they vastly inflated their asking price.
  • Homeowners who bought at higher prices spend longer trying to sell their homes and set higher listing prices, even though the current market value is rationally all that matters.

The endowment effect doesn't occur in all cases: people are happy to exchange a $5 bill for five $1 bills, and furniture vendors are happy to exchange a table for money. When the asset in question is held for exchange, the endowment effect doesn't apply.

You only feel endowed with items that are planned for consumption or use, like a bottle of wine or vacation days.

As with prospect theory and loss aversion, experienced financial traders show less attachment to the endowment effect.

Goals are Reference Points

We are driven more to avoid failing a goal than to exceed it. Failing a goal is perceived as a loss; exceeding the goal is a gain.

Examples:

  • Studies of cab drivers show they have a daily target income. When the going is good on a particular day, they clock out early, rather than making the most of the day to offset less profitable days.
  • Golfers are more accurate when they putt for par (to avoid a bogey, or one over par) than when they aim for a birdie (one under par). The difference in success is 3.6%.

Status Quo Bias

In another reframing of loss aversion, we are biased toward keeping the status quo. Two effects are at play here: 1) the endowment effect exaggerates the value of what you have, warping your prior indifference curve, and 2) loss aversion makes you hesitant to take on risky bets, since losses are more painful than gains.

Examples:

  • In negotiations, concessions are painful because they represent losses from the status quo. Both parties make concessions, but because losses outweigh gains, the other side's concessions never feel like they make up for your own. This is why negotiations can end with everyone feeling they walked away unhappy. The feeling is aggravated when the parties are merely allocating losses, like an ailing company negotiating with a labor union over staff cuts, instead of splitting a growing pie.
  • Political reform is difficult because the people who stand to lose mobilize more aggressively than those who stand to gain. Reforms thus often include grandfather clauses that protect existing stakeholders.

Fairness

The normal price of a good in a store is the reference point. Generally, it's considered unfair to exploit market power by raising prices to match increased demand, unless the store must do so to avoid a loss. This is why there's outrage when a supermarket in a blizzard suddenly raises prices on shovels.

Consider this scenario about pay: an employee is hired at $9 per hour, but the market rate drops to $7 per hour. It’s considered unfair to change the employee’s rate. But if that employee leaves, it’s acceptable to pay the new employee $7 per hour. Intuitively this sounds right, but rationally it sounds odd. The market rate should be the market rate.

  • The explanation: the old employee had a personal entitlement to his wage; the new employee has no entitlement to the previous worker's wage. On the other side, the company is entitled to retain its current profit, but not to increase it by encroaching on others' entitlements. These are intuitive notions of fairness that are rarely made explicit.

Employers who violate rules of fairness are punished with reduced productivity from employees and reduced sales from customers.

(Shortform note: the rise of on-demand services like Uber and real-time pricing may blunt this effect over time, gradually educating the public about supply, demand, and pricing. Soon it might seem normal for shovels to be more expensive during snowstorms.)

Regret and Responsibility

In another twist, the feeling of regret depends on your default action and whether you deviate from it. If you do something uncharacteristic and fail, you’re more likely to feel regret, but others are less likely to blame you.

Consider: Alan never picks up hitchhikers. Yesterday he gave a man a ride and was robbed.

Barry frequently picks up hitchhikers. Yesterday he gave a man a ride and was robbed.

Who will experience greater regret?

Who will be criticized most severely by others?

Common answer: Alan will experience greater regret, but Barry will be criticized most severely. Alan will wish he had stuck to his normal path and will want to undo the event. Barry will be seen as habitually taking unreasonable risks, and thus "deserving of his fate."

(Shortform note: sadly, this might drive some people to blame victims of rape, who allegedly were “asking for it” through their typical dress or behavior.)

In some cases, the default option for avoiding regret is to do nothing (e.g., not sell your stock), while the unusual action is to do something.

Corollaries

If you do the normal thing and get a bad outcome, this feels better than doing the unusual thing and getting a bad outcome.

  • Selling stock X and missing out on stock X’s big gain feels worse than keeping stock X and missing out on a different stock Y’s big gain.
  • Volunteering for a clinical trial and getting a disease from that trial feels much worse than going about life as usual and contracting the same disease.

If you do an unusual thing and get a good outcome, this feels better than doing a normal thing and getting a good outcome.

  • People are happier if they gamble and win, than if they refrain from the gamble and get the same amount.

If you anticipate a bad outcome and regret, you will tend to do the normal thing.

  • Consumers at a store who are reminded they may feel regret over their purchase tend to favor brand names over generics.
  • A surgeon may believe an experimental treatment is better for a patient, but she faces more liability for a bad outcome than if she had followed protocol. This anticipation of regret and punishment may lead the surgeon to choose the less optimal treatment.
  • For employees who wish to strike out on their own and start a company, picturing their startup failing may keep them in their salaried jobs.

If you anticipate a good outcome, you will tend to do the unusual thing.

Taboo tradeoff: there are certain things that people hold as sacrosanct (notably, health and safety), against which they would be unwilling to trade anything. This is driven by the prediction of strong regret if they made the trade and harm resulted.

  • The governments of Europe use the “precautionary principle,” which prohibits any action that might cause harm. This overly cautious principle could have precluded innovations like airplanes, antibiotics, and X-rays.

The risk of regret is that it causes inappropriate loss aversion. (Shortform note: this is aggravated by two factors we’ve previously discussed: 1) because probabilities are overweighted at the edges, we overestimate low chances of harm, 2) loss aversion makes the losses feel more painful than the gains.)

Antidotes to Regret

As you journal your decisions, note the possibility of regret before deciding. Then if a bad outcome happens, remember that you considered the possibility of regret before you made your decision. This avoids the hindsight bias and the feeling of “I almost made a better choice and I should have known better.”

Downweight your future regret—in practice, even in the case of a bad outcome, you will deploy psychological defenses to soothe this pain.

Specificity and Emotion

When an event is made specific or vivid, people become less sensitive to probability (lower chances are overestimated and higher chances are underestimated).

When an event is specifically defined, your mind constructs a plausible scenario in which it could happen. And because judgments of probability depend on the fluency with which scenarios come to mind, you overestimate the probability. At the other end of the spectrum, the possibility of the event not occurring is also vivid, and thus also overweighted.

Here are a few examples of how specificity decreases sensitivity to probability:

  • People were asked to estimate the chances each of 8 NBA playoff teams had of winning the championship. The chances across all teams should of course total 100%. But for these subjects, the sum was 240%! The reason: when considering each team in isolation, it was easy to construct a plausible path to it winning, while the alternative (one of the 7 other teams winning) was a diffuse possibility. Thus each individual team's chances were overestimated while attention was focused on that team.
    • This effect disappeared when the scenario was simplified, and subjects were asked to estimate the chance of the winning team coming from the Eastern vs the Western conference. In this case, the event and its alternative were equally specific, and it was clearer that the probabilities should add to 100%.
  • This partially explains the planning fallacy. The successful case is very available to the mind, so its likelihood is embellished. The many failure cases are diffuse and not concentrated on.
  • People become less sensitive to probability when the outcomes are emotional (like “getting a painful shock” or “kissing your celebrity crush”) rather than cash-based.
  • When people buy lottery tickets, they tend to bask in the fantasy of winning. This may make them less sensitive to the probability of winning.
  • (Shortform example: In cognitive behavior therapy for mental illness, patients are instructed to picture instances when they were competent. Making this vivid allows a depressed patient to increase their estimation of competence.)

(Shortform note: this effect could be the foundation of the common advice to visualize your success. Doing so helps you overcome the statistical knowledge of the low rate of success, which would otherwise drag you down psychologically.)

Denominator Neglect

Each rate or probability is a fraction, consisting of a numerator at the top (the events you are estimating) and a denominator (all the possible events).

Denominator neglect occurs when people focus on the size of the numerator rather than on the fraction as a whole, ignoring the denominator.

Here’s an example:

You have the choice of drawing a marble from one of two urns. If you draw a red marble, you win a prize.

  • Urn A contains 10 marbles, of which 1 is red and the rest are white.

  • Urn B contains 100 marbles, of which 8 are red and the rest are white.

Which do you choose?

30% of subjects choose Urn B, since it contains more red marbles, even though its odds are worse. Before you call this silly, picture both urns. Urn A has just 1 winning marble among the white marbles. In contrast, Urn B has 8 winning chances standing out among the white marbles. Urn B conveys a more hopeful feeling.
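The rational comparison is simple arithmetic. A minimal sketch, using the urn counts from the example:

```python
# Compare the actual win probabilities of the two urns.
urn_a = {"red": 1, "total": 10}    # Urn A: 1 red marble out of 10
urn_b = {"red": 8, "total": 100}   # Urn B: 8 red marbles out of 100

p_a = urn_a["red"] / urn_a["total"]  # 0.10
p_b = urn_b["red"] / urn_b["total"]  # 0.08

print(f"Urn A: {p_a:.0%} chance, Urn B: {p_b:.0%} chance")
# Urn A is the better bet, despite having only 1 winning marble.
assert p_a > p_b
```

Dividing by the denominator is exactly the step that denominator neglect skips: attention stays on the counts 8 vs. 1 instead of the rates 8% vs. 10%.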

Here are more examples of denominator neglect:

  • Percentages and fractions are mathematically equivalent, but one can be more impactful than the other. A 0.001% chance of permanent disability seems low. On the other hand, "1 out of 100,000 children will be disabled" seems high, even though the two are identical. In the latter description, the other 99,999 children fade into the background; the focus is on the one child who gets disabled.
  • Manipulation of the fraction also makes a difference. People who read about “a disease that kills 24 people out of 100” rated it less dangerous than people who read about “a disease that kills 1,286 people out of every 10,000.” The first disease is actually nearly twice as deadly, but the second numerator is much larger and thus looks more dangerous.
    • (Note that this study was a between-subjects study, in that one group saw one presentation and gave a rating, and another group saw the other presentation. Prompting a direct comparison within one person would invoke System 2 and likely cause less error.)

Antidotes to Specificity and Denominator Neglect

(Shortform note: the following are our additions and not explicitly described in the book.)

  • When estimating the chances of a project working, also picture the failure cases vividly and estimate their probabilities. This will counteract the vividness of the success case.
  • When you hear a vivid story about how things will work, strip away the irrelevant details to regain sensitivity to probabilities.
  • To avoid denominator neglect, always reduce two statistics to the same denominator for comparison. For instance, if given two fractions, 3 out of 100 and 400 out of 10,000, convert them both to a denominator of 10,000. Compare apples to apples.
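The common-denominator conversion above can be sketched in a few lines with Python's `fractions` module, using the numbers from the example:

```python
from fractions import Fraction

# Express both risks exactly, then rescale to a common denominator.
risk_1 = Fraction(3, 100)        # "3 out of 100"
risk_2 = Fraction(400, 10_000)   # "400 out of 10,000"

common = 10_000
count_1 = risk_1 * common        # 300
count_2 = risk_2 * common        # 400

print(f"{count_1} out of {common} vs. {count_2} out of {common}")
# With matching denominators, the comparison is apples to apples:
assert count_2 > count_1
```

Using exact fractions rather than floats sidesteps rounding, and stating both risks over the same 10,000 makes the numerators directly comparable.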

In some cases, you might exploit these biases for your own gain to overcome your hesitation:

  • If you’re hesitant because you know you have a small chance of success (like starting a new business), paint a vivid picture of the success you could enjoy. This will help you overestimate the chances of success. (Though use this with caution, because you might not want to delude yourself too much.)
  • If you’re facing a bad outcome that is likely, then think about the good outcome and make it more vivid. This will help counteract the vividness of the bad outcome. For instance, if you’re suffering from cancer and are afraid that your chemo treatment won’t work, picture the inverse event of being cured and celebrating your grandson’s birthday.
  • To make a risk seem larger, state it as a count of affected people over a large denominator, focusing attention on the numerator. To make a risk seem smaller, state it as a percentage.