How will AI change our lives? Experts can’t agree — and that could be a problem.

Artificial intelligence is playing strategy games, writing news articles, folding proteins, and teaching grandmasters new moves in Go. Some experts warn that as we make our systems more powerful, we’ll risk unprecedented dangers. Others argue that that day is centuries away, and predictions about it today are ridiculous. The American public, when surveyed, is nervous about automation, data privacy, and “critical AI system failures” that could end up killing people.

How do you grapple with a topic like that?

Two new books both take a similar approach. Possible Minds, edited by John Brockman and published last week by Penguin Press, asks 25 important thinkers — including Max Tegmark, Jaan Tallinn, Steven Pinker, and Stuart Russell — to each contribute a short essay on “ways of looking” at AI.

Architects of Intelligence, by futurist Martin Ford, published last November by Packt Publishing, promises us “the truth about AI from the people building it” and includes 22 conversations between Ford and highly regarded researchers, including Google Brain cofounder Andrew Ng, Facebook’s Yann LeCun, and DeepMind’s Demis Hassabis.

Across the two books, 45 researchers (some feature in both) describe their thinking. Almost all perceive something momentous on the horizon. But they differ in how they characterize what, exactly, is momentous — and they disagree profoundly on whether it should give us pause.

One gets the sense these are the kinds of books that could perhaps have been written in 1980 about the internet — and AI is, many of these experts tell us, likely to be a bigger deal than the internet. (McKinsey Global Institute director James Manyika, in Architects of Intelligence, compares it to electricity in its transformative potential.) It is easy for the people involved to see that there’s something enormous here, but surprisingly difficult for them to anticipate which of its potential promises will bear fruit, or when, or whether that will be for the good.

Some of the predictions here sound bizarre and science fictional; others bizarrely restrained. And while both books make for gripping reading, they have the same shortcoming: they can get perspectives from the preeminent voices of AI, they can list them next to each other in a table of contents, but they cannot make those people talk to each other.

Almost everyone agrees that certain questions — when general AI (that is, AI that has human-level problem-solving abilities) will happen, how it’ll be built, whether it’s dangerous, how our lives will change — are questions of critical importance, but they disagree on almost everything else, even basic definitions. Surveys show different experts estimating that we’ll arrive at general AI any time from 20 years to two centuries from now. That’s an astonishing amount of disagreement, even in a field as uncertain as this one.

I was intrigued, fascinated, and alarmed in turn by the takes on AI from these researchers, many of whom have laid the very foundations of the field’s triumphs today. But when I put these books down I mostly felt impatient. We need more than separate takes from each preeminent scholar — we need them to sit down and start building a consensus around priorities.

That’s because the worst-case scenario for AI is pretty horrific. And if that scenario ends up being true, the harm to humanity could be staggering. The disagreements on display in these anthologies aren’t just charming intellectual spats — they’re essential to the policy decisions that we need to make today.

Everyone would like us to read a history textbook

Today’s artificial intelligences are “narrow AI” — they often surpass human capabilities, but only in specific, bounded domains like playing games or generating images. In other areas, like translation, reading comprehension, or driving, they can’t yet surpass humans — though they’re getting closer.

“Narrow AI,” many expect, will someday give way to “general AI,” or AGI — systems that have human-level problem-solving abilities across many different domains.

Some of the researchers featured in Architects of Intelligence and Possible Minds are trying to build AGI. Some think that’ll kill us. And some think the whole endeavor is fanciful, or at least a puzzle we can safely leave for the 22nd century.

They do find some common ground, though: largely in complaining that the AI debate today lacks the context of the last one, and the one before that. When researchers first concluded that AI was possible in the 1940s and 1950s, they underestimated how difficult it would be. There were optimistic predictions that AGI was only a few decades out. While new tools and technologies have changed the AI landscape, that history has made AI researchers extremely wary of claiming that we’re close to AGI.

“Discussions about artificial intelligence have been oddly ahistorical,” Neil Gershenfeld, the director of MIT’s Center for Bits and Atoms, notes in his essay in Possible Minds. “They could better be described as manic-depressive: depending on how you count, we’re now in the fifth boom-and-bust cycle.”

Yoshua Bengio, a professor at the University of Montreal, picked a different metaphor that gets at the same idea. “We’re currently climbing a hill, and we are all excited because we have made a lot of progress on climbing the hill, but as we approach the top of the hill, we can start to see a series of other hills rising in front of us,” he tells Ford. In the introduction to Possible Minds, Brockman writes of the AI pioneers, “over the decades I rode with them on waves of enthusiasm, and into valleys of disappointment.”

The specter of those past “AI winters” — periods when advances in AI research stalled — haunts most of the essayists, whether or not they think we’re headed for another one. “We have been working on AI problems for over 60 years,” Daniela Rus, the director of MIT’s Computer Science & Artificial Intelligence Laboratory, says when Ford asks her about AGI. “If the founders of the field were able to see what we tout as great advances today, they would be very disappointed because it appears we have not made much progress.”

Even among those who are more optimistic about AI, there’s fear that expectations are rising too high, and that there might be backlash — less funding, an exodus of researchers and interest — if they’re not met.

“I don’t think there’ll be another AI winter,” Andrew Ng, cofounder of Google Brain and Coursera, tells Ford. “But I do think there needs to be a reset of expectations about AGI. In the earlier AI winters, there was a lot of hype about technologies that ultimately did not really deliver. … I think the rise of deep learning was unfortunately coupled with false hopes and dreams of a sure path to achieving AGI, and I think that resetting everyone’s expectations about that would be very helpful.”

Alan Turing and John von Neumann were some of the first to anticipate the potential of AI. Many of the questions that are being raised today — including the question of whether our mistakes have the potential to annihilate us — are questions they contemplated, too. Among the best parts of both books are the lengthy segments that the authors spend putting today’s achievements and worries into the context of the 30-year careers of many of these luminaries and the 70-year history of their field.

Putting the worries into context isn’t enough to make them fade, though. LeCun, famously skeptical of the idea we should worry about AI risks, nonetheless emphatically affirms to Ford that we’ll develop general AI someday, with all the implications that come with that.

Ng points out that AI has now embedded itself so thoroughly in industry, research labs, and universities that the frustration-driven collective disinterest that drove past AI winters seems unlikely. Even just fully exploring the implications of the techniques we’ve discovered so far will take many years, during which new paradigms, if they’re needed, can emerge.

The people working on AI largely believe we’ll get AGI someday, even if that day is distant. But not all of them think it’s distant. Google’s Ray Kurzweil, famous for his Singularitarian optimism, insists in his segment that that day will come in 2029 — and, he tells Ford, “there’s a growing group of people who think I’m too conservative.”

The most profound disagreements are over two things: timelines and dangers

The experts in both books have extraordinarily varied visions of AI and what it means.

They stake out widely varied stances on the usefulness of the Turing Test — checking whether a computer can carry on a conversation and convince onlookers it’s human — for evaluating when an AI has human-level skills. They differ in how impressed they are with neural nets — the approach to AI behind most recent advances — and in how far they believe that the dominant deep learning AI paradigm will take us. It’s hard to encapsulate their varied visions in a way that does justice to the nuances of each position.

But there are a few obvious big disagreements. The first is when AGI will happen, with some experts confident that it’s distant, some confident that it’s terrifyingly close, and many unwilling to be nailed down on the topic — perhaps waiting to see what challenges come into focus when we crest the next hill in AI progress. It’s unusual to see disagreement this profound in a fairly mature field; it speaks to how much even the people actively working on AGI still disagree on what to expect.

Kurzweil, leading the pack with his 2029 prediction, is well known for predicting extremely fast technological progress. MIT’s Max Tegmark, featured in Possible Minds, is not, but his estimates are only slightly more conservative. He quotes a recent survey as finding “AI systems will probably (over 50 percent) reach overall human ability by 2040-50, and very likely (with 90 percent probability) by 2075.”

The second disagreement is over whether there’s a serious danger that AI will wipe out humanity — a concern that has become increasingly pronounced in light of recent AI advances. UC Berkeley’s Stuart Russell, present in both books, believes there is. He’s joined by Oxford’s Nick Bostrom, Tegmark, and Skype billionaire and Centre for the Study of Existential Risk cofounder Jaan Tallinn. Concern for AI risk is notably less commonly voiced by the researchers affiliated with Facebook, Google Brain, and DeepMind.

Norbert Wiener’s 1950 book The Human Use of Human Beings, the text that inspired Possible Minds, is among the earliest texts to grapple with the argument at the core of AI safety worries: that an advanced AI’s ability to “understand what we really meant” will not, by itself, make it reliably act in ways humans approve of. An AI “which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us,” he warned.

Steven Pinker, on the other hand, dedicates a significant share of his essay in Possible Minds to ridiculing the idea. “These scenarios are self-refuting,” he writes, arguing they depend on the assumption that an AI is “so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.”

The divisions among the authors reflect divisions in the field. “A recent survey of AI researchers who published at the two major international AI conferences in 2015 found that 40 percent now think that risks from highly advanced AI are either ‘an important problem’ or ‘among the most important problems in the field,’” Tallinn writes in his essay in Possible Minds.

Should we wait to worry about AI safety until the other 60 percent are in agreement? “Imagine yourself sitting in a plane about to take off,” he writes. “Suddenly there’s an announcement that 40 percent of the experts believe there’s a bomb on board. At that point, the course of action is already clear, and sitting there waiting for the remaining 60 percent to come around isn’t part of it.”

It is in puzzling through this disagreement that I found myself most frustrated with the format of both books, which seem to open window after window into the minds of researchers and scientists, only to leave it to the reader to sketch floor plans and notice how the views through all of these windows don’t line up.

The format of Architects of Intelligence — a series of interviews between Ford and the experts — at least permits Ford to follow up when one expert makes a claim that another one, just a chapter earlier, rejected as ridiculous. But this gets us only shallow understandings of how they disagree. Kurzweil thinks that those who claim AGI is a hundred years off are failing to understand the power of exponential growth — thinking “too linearly.” That’s a little more insight into the root of these disagreements. But it’s all we get.

How can a field get to the point where its preeminent scholars expect its critical milestone to be hit at some point in the next 10 years — or three centuries? Perhaps it isn’t as surprising as it feels — a survey of scientists a decade before the Manhattan Project, asking when weaponized nuclear fission would first be achieved, might have produced such a wide range of guesses.

But that much uncertainty is not reassuring, and neither is the analogy to the dawn of the nuclear era. The stakes are exceptionally high here, and a reader doesn’t walk away from Possible Minds and Architects of Intelligence feeling that there’s a core group of experts who are all on the same page.

At best, it feels like we’re seeing many blind men grasping at the same elephant. At worst, we’re watching them walk right into a deadly mistake, failing to treat the deep uncertainty and divergent expectations of their colleagues as the warning sign it is.



Source: vox.com
