Lesson: As algorithms provide increasingly accurate suggestions, their convenience is almost irresistible—but relying on algorithms to make your decisions causes you to lose the freedom and ability to make your own choices.
In addition to threatening jobs, technology threatens human liberty, as algorithms learn so much about people that they gain an immense power to influence and manipulate. This is another way that technology is undermining liberalism, which is all about freedom and personal liberties—to vote, to buy goods in a free market, and to pursue individual dreams and goals with the protection of human rights.
Liberalism maintains that everyone has free will, regardless of education and social status. In practice, people’s free-will choices reflect their feelings more often than their knowledge. For example, between two presidential candidates, voters are more likely to choose the one who gives them a good feeling, even if the other candidate has a more thorough policy plan. Similarly, elected officials often make decisions based on gut feelings and intuition, even when those feelings go against advisors’ recommendations. From the way voters vote to the way leaders lead, democracy hinges on emotion-driven free will—but technological advancements could make it possible to hack people’s emotions, leading to disastrous results.
Before the advent of liberalism, societies were guided by mystical, divine messages from the gods. In the last few centuries, the authority shifted from gods to free will. Although free will feels free, it’s actually a biochemical response honed by evolution and designed to help you survive and thrive. For example, when you see a snake, your reaction to run away is merely an evolutionary response to keep you safe. Similarly, when you feel bad after having an argument with a friend, your desire to make amends is not purely emotional, but rather a function of your biological wiring to cooperate within a community.
This biochemical process meant to promote your safety and well-being—which we call free will—has historically been a perfectly valid method of making decisions and running democracies. However, science has developed technology that can not only replicate that process but also perform it better than you can. As people shift authority from free will to computer algorithms, liberalism becomes increasingly obsolete.
People have already delegated some tasks to algorithms: You let Netflix suggest your next movie, and Google Maps tells you when and where to turn. Each decision that algorithms make for you has two effects: your trust in the algorithm’s judgment grows, and your own decision-making ability weakens from disuse, leaving you even more dependent on the algorithm for the next choice.
The algorithms won’t be perfect, and they won’t make the best decision every time—but they don’t have to. As long as algorithms make better choices on average than humans do, they’ll still be considered a better alternative. Additionally, if you wear biometric sensors on or inside your body, those sensors can monitor your heart rate, blood pressure, and other indicators of your preferences, opinions, and emotions. Using this data, the computer can make even better-informed decisions for you.
The reliance on algorithms can easily snowball into bigger and bigger decisions, such as where to go to college, which career to pursue, and whom to marry. An algorithm that uses your biometric data can learn what makes you laugh, what makes you cringe, and what makes you cry. This algorithm could use that data to find a compatible partner for you to marry, and it would probably make a better choice than you would with your free will, since your decision might be influenced by a past breakup or be otherwise biased in some way.
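As a purely hypothetical sketch of what such matchmaking might look like, the Python below ranks candidate partners by measured reactions rather than self-reports. Every name, signal, and weight here is invented for illustration; a real system would learn its weights from enormous datasets rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class BiometricReaction:
    """Hypothetical readings captured while you spend time with someone."""
    heart_rate_delta: float  # change from resting heart rate, in bpm
    smile_seconds: float     # time spent smiling during the encounter
    stress_index: float      # 0.0 (calm) to 1.0 (highly stressed)

def compatibility_score(reactions: list[BiometricReaction]) -> float:
    """Score a candidate partner from measurements, not self-reports.

    The weights are arbitrary placeholders; a real algorithm would fit
    them to long-term outcomes across millions of users.
    """
    if not reactions:
        return 0.0
    n = len(reactions)
    excitement = sum(r.heart_rate_delta for r in reactions) / n
    enjoyment = sum(r.smile_seconds for r in reactions) / n
    stress = sum(r.stress_index for r in reactions) / n
    return 0.3 * excitement + 0.5 * enjoyment - 10.0 * stress

# The algorithm simply ranks candidates by score: no lingering memory of
# a past breakup, no flattering self-narrative, just the measurements.
candidates = {
    "candidate_a": [BiometricReaction(8.0, 40.0, 0.2),
                    BiometricReaction(6.0, 35.0, 0.1)],
    "candidate_b": [BiometricReaction(12.0, 5.0, 0.7)],
}
best = max(candidates, key=lambda name: compatibility_score(candidates[name]))
print(best)  # candidate_a: higher enjoyment, lower stress
```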
If computers made all of your big decisions, your life would probably be much smoother, without the stress of decision-making or the consequences of poor choices. But what would that life be like? So much of the drama and action in day-to-day life revolves around decision-making—from deciding whether to take on a project at work to figuring out where to relocate your family. The value humans place on decision-making is reflected in various institutions. For example, democratic elections treat the voter’s choice as supreme, and free markets treat the customer’s choice as supreme.
When humans rely on algorithms to make every choice for them—essentially molding the path of their lives—what will humans’ role be, besides providing biometric data to be used in the decision-making process and then carrying out the verdict?
Some of the most difficult and nuanced decisions people have to make are about ethical dilemmas. If they’re programmed to do so, algorithms could even handle ethical decisions—but the capability would come with pros and cons.
On the positive side, algorithms would make the ethical choice every time. The computer wouldn’t be swayed by selfish motives, emotions, or subconscious biases, as humans are. Regardless of how resolute a person may be about ethics, in a stressful or chaotic situation, emotion and primitive instinct kick in and can override philosophical principles. Additionally, a hiring manager can insist that racial and gender discrimination are wrong—but her subconscious biases may still prevent her from hiring a black female job applicant.
On the negative side, delegating decisions to machines that follow absolute ethics raises the question: Who decides which philosophy is programmed into the software? Imagine a self-driving car cruising down a road when children run into the street in front of it. In a split second, the car’s algorithm determines that there are two choices: swerve off the road, sparing the children but sacrificing the car’s owner, or stay the course, protecting the owner but hitting the children.
Alternatively, the self-driving car manufacturer could offer two models of the car, each of which follows a different philosophy. If consumers have to choose which model to buy, how many will choose the car that sacrifices them? Although many people might agree that the car should spare the children in a hypothetical situation, few would actually volunteer to sacrifice themselves in order to follow ethics (this brings us back to the point above, that humans often don’t follow ethics in real-life situations).
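To make the dilemma concrete, here is a minimal sketch, in Python, of what “programming a philosophy” might amount to. The scenario, the policy names, and the decision logic are all invented for illustration; real autonomous-vehicle software is vastly more complicated and does not expose ethics as a single switch.

```python
from enum import Enum

class EthicsPolicy(Enum):
    ALTRUIST = "altruist"  # minimize total harm, even at the owner's expense
    EGOIST = "egoist"      # protect the vehicle's occupant first

def choose_maneuver(policy: EthicsPolicy, pedestrians_in_path: int,
                    swerve_kills_occupant: bool) -> str:
    """Pick between staying on course and swerving, per the installed policy."""
    if policy is EthicsPolicy.ALTRUIST:
        # Sacrifice the occupant whenever that reduces the total loss of life.
        occupant_cost = 1 if swerve_kills_occupant else 0
        return "swerve" if pedestrians_in_path > occupant_cost else "stay"
    # EGOIST: never choose a maneuver that kills the occupant.
    return "stay" if swerve_kills_occupant else "swerve"

# The same emergency, two showroom models, two opposite verdicts:
for policy in EthicsPolicy:
    verdict = choose_maneuver(policy, pedestrians_in_path=2,
                              swerve_kills_occupant=True)
    print(f"{policy.value}: {verdict}")
# altruist: swerve  (spares the children, sacrifices the owner)
# egoist: stay      (protects the owner, hits the children)
```

The point of the sketch is that the moral choice collapses into a parameter, and someone (the manufacturer, the buyer, or the government) has to set it.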
Another possibility is that the government mandates how the cars are programmed. On one hand, this gives the government the power to pass laws that are guaranteed to be followed to a tee, since the computers won’t deviate from their programming. On the other hand, this practically amounts to totalitarian power, because lawmakers are determining the actions of computers that are entrusted with making decisions for people.
The potential dangers of AI are scary, but some of them are already a reality. Corporations, banks, and other institutions already use algorithms to make decisions, such as which loan applicants to approve or deny. On the positive side, an algorithm can’t racially discriminate against an applicant (unless it’s programmed to do so). On the negative side, the algorithm may discriminate against you based on individual characteristics—it could be something in your DNA or your social media history. With algorithms in charge, you’re more likely to face discrimination based on who you are rather than on which group you belong to.
This shift brings two consequences: you may never learn why an algorithm rejected you, and because no one else shares your exact combination of traits, there is no group of fellow victims to organize with in protest.
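As a toy illustration of this kind of individualized judgment (all feature names and weights below are invented, not drawn from any real lender’s model), consider a loan-scoring function that never mentions race or gender yet can still single you out:

```python
def loan_score(applicant: dict) -> float:
    """Toy credit score built from individual traits, not group membership.

    The weights are invented placeholders. Nothing here references race
    or gender, yet the model can still penalize one specific person for
    a combination of reasons no human reviewer could articulate.
    """
    score = 600.0
    score += 0.5 * applicant.get("months_at_current_job", 0)
    score -= 2.0 * applicant.get("late_night_social_posts", 0)  # behavioral proxy
    score -= 50.0 * applicant.get("risk_gene_flag", 0)          # hypothetical DNA marker
    return score

applicant = {
    "months_at_current_job": 36,
    "late_night_social_posts": 40,
    "risk_gene_flag": 1,
}
print(loan_score(applicant))  # 600 + 18 - 80 - 50 = 488: denied, uniquely
```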
The example of self-driving cars highlights one of the dangers of AI: The computer does whatever it’s programmed to do, no matter what. In some cases, that characteristic makes computers less dangerous than humans, because the computers won’t succumb to anger or retaliation and break the rules. However, the other side of the coin is that computers won’t be influenced by compassion or extenuating circumstances. In other words, robots are as benign or as dangerous as the people who program them—and, in the hands of corrupt, violent, or power-hungry people, robots could bring devastation to humans.
In the 21st century, AI could become widespread in countries run by dictators. Consider the possibilities if a dictatorship deployed this technology to monitor its citizens’ bodies and emotions and punish even private disloyalty.
In the last century, democratic countries were more prosperous than dictatorships because they distributed information processing among many people. Faced with a large volume of information—for example, when deciding whether to impose a new tariff—many minds working in parallel could process it and reach a decision quickly, enabling the country to act promptly and, thus, prosper. Dictatorships, on the other hand, concentrated information and responsibility in a small group, which slowed processing and decision-making.
By contrast, in the 21st century, AI could give dictatorships a competitive advantage. First, algorithms can process information much more rapidly than humans, which would close the gap that currently gives democracies an advantage over dictatorships. Second, the more information an algorithm processes, the more it learns and the more accurate it becomes—and dictators are likely to collect more information than democracies. For example, a democratic country keeps citizens’ medical records private, while an authoritarian government may collect not only medical records but also DNA scans. That kind of massive database of information would let a dictator know practically everything about her citizens, enabling her to wield immense control over them.
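The claim that more data yields more accuracy can be illustrated with a toy simulation (the setup is invented purely to show the scaling): an “algorithm” estimates a hidden preference from noisy observations, and its error shrinks as the number of observations grows.

```python
import random

random.seed(0)
TRUE_PREFERENCE = 0.73  # the hidden value the algorithm is trying to learn

def estimate(n_observations: int) -> float:
    """Average n noisy readings of the hidden preference."""
    readings = (TRUE_PREFERENCE + random.gauss(0, 0.3)
                for _ in range(n_observations))
    return sum(readings) / n_observations

for n in (10, 1_000, 100_000):
    error = abs(estimate(n) - TRUE_PREFERENCE)
    print(f"{n:>7} observations -> error {error:.4f}")
# The error falls roughly with the square root of the sample size, so
# whoever collects the most data ends up with the most accurate model.
```

In this light, a democracy’s privacy safeguards act as a deliberate cap on how much data the model sees, while a surveillance state removes the cap entirely.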
Although AI could develop to the point that programmers could wire it with consciousness—which would essentially give computers a mind of their own, as science fiction thrillers forewarn—the possibility is remote. The larger danger is that humans put so much effort into developing AI that they neglect to develop their own consciousness and ability to discern. If people come to rely on computers for everything—and distrust their own instincts and capabilities in the process—then they become easy victims for manipulation. In fact, this threat has begun to come true in elections all over the world, as social media bots exploit voters’ fears and prejudices in order to influence their political actions.
In order to avoid falling victim to total mind control by AI, humans must devote more time and energy to researching and developing human consciousness. Furthermore, this commitment must be prioritized above immediate economic and political benefits. For example, many managers expect their employees to respond promptly to emails, even after hours. That expectation causes employees to compulsively check and answer emails, even at the expense of their experiences and sensations—during dinner, they may be so absorbed in their email that they don’t even notice the taste and texture of their food. If humans follow this road, they will become cogs in a machine run by robots, and they’ll lose the ability to live up to their potential as individuals.