When does it make sense to let people make active choices on their own, and when is it preferable to design default rules that “nudge” people in a certain direction — for example, to become an organ donor or to use energy generated by wind?
In modern societies, individuals face a barrage of complicated choices: how to set up retirement accounts, how much to save, whether to waive collision coverage on rental car agreements, and so on.
Decisions take time and attention, and people are busy. Default rules determine what happens if people choose to do nothing.
Depending on what an institution is trying to achieve, changing default rules can be one of the most powerful tools at its disposal, argues Harvard Law School professor Cass R. Sunstein — “perhaps more effective than significant economic incentives.”
In “Deciding by Default,” published in the University of Pennsylvania Law Review (vol. 162, no. 1, December 2013), Sunstein examines the rationale for default rules. Why and when would organizations use blanket rules instead of allowing individuals to make their own choices? Why and when would they use personalized rules based on a person’s individual profile, such as demographic data?
Default rules, Sunstein explains, don’t impose mandates or bans. Rather, they steer people in a particular direction — while offering opportunities to opt out — and produce outcomes that institutions want at costs that are lower than economic incentives.
By contrast, requiring individuals to make their own choices can impose high costs, chiefly the time and attention it takes to learn about the options.
The job of “choice architects,” according to Sunstein, is to understand decision costs (including how confusing the decision is and how heterogeneous the pool of decision makers is) and the costs of errors (what happens when people decide in a way that’s detrimental to them or to other members of a group).