The rise of artificial intelligence has been met with reactionary fears of robots taking over. We’ve all seen the movies. What’s left out of this conversation is a more practical threat. We should be concerned that AI will be hijacked, not by rogue computers out to destroy mankind, but by people with ulterior motives.

Pulling off something as simple as pizza night can take five isolated apps and five separate steps. The rest of our lives are equally complicated, if not more so. There are hundreds of apps, each designed to help us with a small slice of our life. Yet in pursuit of simplicity, we’ve actually made life more complex.

A basic form of AI is already here: decision support, software that helps us make decisions based on our behavior. Recommendation engines suggest just the right items for us to buy; navigation systems tell us the best way to drive home. As AI advances, it will embed itself even deeper into our social fabric, shaping everything from how we do business to how we receive medical care.
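To make that concrete, here is a minimal sketch of one way a behavior-based recommender could work; the catalog, tags, and scoring rule are hypothetical, not any particular product’s algorithm.

```python
# A minimal sketch of behavior-based decision support: score each candidate
# item by how often the user has engaged with its tags before.
# The catalog, tags, and history are hypothetical examples, not real product data.
from collections import Counter

catalog = {
    "trail shoes":   {"outdoors", "fitness"},
    "espresso kit":  {"kitchen", "coffee"},
    "camping stove": {"outdoors", "kitchen"},
}

# Tags from items this user has viewed or bought in the past.
history = Counter(["outdoors", "outdoors", "coffee"])

def score(item_tags, history):
    """Sum the user's past engagement with each of the item's tags."""
    return sum(history[tag] for tag in item_tags)

# Recommend items in order of affinity with past behavior.
ranked = sorted(catalog, key=lambda item: score(catalog[item], history), reverse=True)
print(ranked)  # ['trail shoes', 'camping stove', 'espresso kit']
```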

So what happens when AI-powered assistance is so commonplace that we become dependent on it?

Call it Fear of Deciding Alone, or FODA. When deeply quantified support is always at hand, we may grow to doubt the decisions we make without it. There is an apt parallel in FOMO (fear of missing out), a silly meme with serious underpinnings: social media has warped our human instinct for recognition from our peers, creating a landscape in which we present only the best versions of ourselves. Life looks like one big party, and if we don’t keep up, we miss all the fun. FODA is born of the same human desire, only in this case we look to machines, not each other, for validation.

Our growing dependence on decision support is where artificial intelligence is most immediately dangerous. Behind every computer algorithm is a programmer. And behind that programmer is a strategy set by people with business and political motives. It would be easy enough for the people who design AI systems, motivated by greed, self-interest, or politics, to train computers to manipulate our lives in subtle and insidious ways, essentially lying to us through the algorithms that guide our thinking. And because we are so terrified of making our own decisions, we go along with it. The coming tidal wave of decision support threatens to give very few people a phenomenal amount of suggestive power over a great many people—the kind of power that is hard to trace and almost impossible to stop.

“Behind every computer algorithm is a programmer. And behind that programmer is a strategy set by people with business and political motives.”


This is the butterfly effect, in which tiny differences cascade into massive changes over time. In the case of artificial intelligence, it plays out through subtly corrupted software: a programmer can make the smallest tweak to a search algorithm and direct people toward one type of content over others. A subtle, undetectable change in one system can alter outcomes for billions of people. Such power is priceless to a motivated politician or business, and it is the most pressing, worrisome challenge we face as we move toward a world in which computers make more and more decisions for us.
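To see just how small such a tweak can be, consider this hypothetical sketch of a search ranker; the result titles, relevance scores, and bias parameter are all invented for illustration.

```python
# A hypothetical search ranker, to show how small a corrupting tweak can be.
# The result titles, relevance scores, and `bias` parameter are invented examples.
results = [
    {"title": "Independent review", "relevance": 0.92, "category": "independent"},
    {"title": "Sponsor's article",  "relevance": 0.88, "category": "sponsor"},
    {"title": "Critical analysis",  "relevance": 0.90, "category": "independent"},
]

def rank(results, bias=0.0):
    """Order results by relevance; `bias` is the one-line, hard-to-detect tweak."""
    def adjusted(r):
        # The subtle change: quietly inflate one category's score.
        return r["relevance"] + (bias if r["category"] == "sponsor" else 0.0)
    return sorted(results, key=adjusted, reverse=True)

print([r["title"] for r in rank(results)])             # honest ordering by relevance
print([r["title"] for r in rank(results, bias=0.05)])  # the sponsor quietly rises to the top
```

Nothing in the output reveals that a bias was applied; only someone auditing the code, or comparing results at scale, would ever notice.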

Decision support is becoming infrastructure. Just as paved roads cleared the way for cars to replace horses and buggies, decision support will underpin the next generation of medicine, retail, wayfinding, and more.

“A few people could exert power over a great many people—the kind of power that is hard to trace and almost impossible to stop.”


There is hope. This form of artificial intelligence doesn’t have to be something we fear. Our world is full of situations in which we react with our most animalistic instincts. Political positions, financial decisions, attitudes toward social justice: our biggest decisions are often fueled by poor logic and misinformation. In the best circumstances, artificial intelligence could save us from ourselves by helping us understand each other, see the world more clearly, and collectively make better decisions. But we will have to be very careful, and the onus will be, in part, on designers to develop human-centered solutions that resist corruption. If we care about the world we live in, we should think long and hard about the interfaces, rules, and policies that will govern artificial intelligence and our new way of life.