What if our big bets on artificial intelligence go wrong?

In case you haven’t heard, Artificial Intelligence is here at last, and from the attention it’s getting, you would be forgiven for thinking it’s the next big thing, the best thing since sliced bread, and King Kong on a bad day all rolled into one. In fact, the word on the street is that AI will either solve every problem known to humankind or kill us while we sleep — or perhaps both.

While the reality will surely fall between those extremes, one thing we know for certain is that AI is already big business. Gartner, a research and advisory company, estimates that AI will deliver more than $1 trillion in business value in 2018, while the likes of Google, Facebook, Microsoft, and other tech giants are pumping billions into AI research. Big business is racing to develop both cost-saving and innovative AI technologies out of fear of being out-disrupted, while consumers have shown few qualms scooping up in-home intelligent assistants and relying on the AI that underlies an increasing number of digital experiences.

“AI promises to be the most disruptive class of technologies during the next 10 years,” says Gartner’s John-David Lovelock. With it comes the likelihood of significant economic and social disruption, all of which may still pale in comparison to AI’s impact in the decades that follow.

Recent opinions published here (“Future is here, and better with AI” and “Blame sci-fi, cultural conditioning for AI fears,” May 6) promote the view that AI will make our lives better in a host of ways with few corresponding risks. The promises, according to these authors, range from the lofty (better autism treatment) to the questionable (better binge-watching recommendations).

In stark contrast, AI’s critics — including technologist Elon Musk, Oxford philosopher Nick Bostrom, the late physicist Stephen Hawking, 2020 presidential hopeful Andrew Yang, and even (most recently) statesman Henry Kissinger — have challenged proponents by pointing out not only the numerous risks of developing AI, but also the deep ethical questions and sheer unknowns that lie on the other side of the AI frontier.

One of my concerns about AI is that, far closer to reality than AI’s hypothetical cancer-curing capabilities, much of the here-and-now push toward AI is driven by the demand for convenience, cost-savings, entertainment, and the sacred cow of “economic growth.” But those of us who are troubled by our culture’s increasing consumerism and individualism are more likely to see these efforts as built on the insidious idea that more, faster, cheaper, easier takes us closer to, instead of farther from, stronger communities, happier homes, and more fulfilling lives.

Also worrisome is that, like most major technological changes, AI will create winners and losers in the socioeconomic realm, even if the attention-hogging “average American” (wherever he or she may be hiding) is made better off. Gartner notes that — surprise! — manufacturing will be hit hard by AI, while a recent Gallup poll shows that a significant majority of Americans believe AI will widen the gap between rich and poor. Real families and communities will lose out, even as this paper recently reported that “Dayton jobs pay too little” already (May 6).

Sidestepping most serious critiques, AI’s proponents argue that critics’ concerns are little more than far-fetched, science fiction-fueled fears without basis in fact and counter with apples-to-oranges comparisons of AI with previous innovations such as the word processor or even the car. And while Hollywood’s ability to hyperbolize for dramatic effect is undeniable, saying that the “Terminator” film series errs on the extreme is not much comfort. Besides, once the facts on AI are in, which will take decades, it will be too late to change course.

Humanity is poised to make a large collective gamble on AI, but as a prelude to any decision with such far-reaching consequences, it’s critical to reflect and ask: what if we make the wrong bet? Yes, if AI’s critics are wrong but win out and we pass up on AI, we will never know what we missed — that is, besides smarter in-home assistants, more apps that can predict our needs before we know them, and the chance to strengthen the economy in poorer parts of the country like Silicon Valley.

On the other hand, what if we bet the farm on AI, and its proponents are wrong?

Andrew McKenzie is a technology consultant whose current focus is digital banking experiences. He lives in Dayton.

