Are We Living in the Age of Info-Determinism?


In the early two-thousands, Martin Gurri, a media analyst at the Central Intelligence Agency, began considering the political implications of the Internet. Gurri worked in the Open Source Center, a part of the C.I.A. tasked with analyzing publicly available information, such as newspapers, magazines, and reports. With the advent of the Web, focussing exclusively on such sources had begun to feel old-fashioned. Vast numbers of people were writing online, and the ideas that they shared could tank stocks, sway elections, or spark revolutions. “I realized that I couldn’t restrict my search for evidence to the familiar authoritative sources without ignoring a near-infinite number of new sources,” Gurri later wrote. “I was left in a state of uncertainty—a permanent condition for analysis under the new dispensation.”

In 2014, Gurri described the consequences of this uncertainty in a self-published book called “The Revolt of the Public and the Crisis of Authority.” (An updated edition appeared in 2018.) In the old days, he argued, it had been possible to read a newspaper or watch a newscast and feel that you’d got a good grasp of “the news.” The Internet, however, created the sense that there was always more to know—and this was “an acid, corrosive to authority.” Now “every presidential statement, every CIA assessment, every investigative report by a great newspaper, suddenly acquired an arbitrary aspect, and seemed grounded in moral predilection rather than intellectual rigor.” Meanwhile, because everyone could read only a slice of the Internet, the traditional mass audience was splitting into “vital communities”—“groups of wildly disparate size gathered organically around a shared interest or theme.” These communities, Gurri thought, had a characteristic mood: they revelled in the destruction of received opinion and the disassembly of arguments from authority. “Every expert is surrounded by a horde of amateurs eager to pounce on every mistake and mock every unsuccessful prediction or policy,” Gurri wrote. And yet, “the public opposes, but does not propose.” Demolishing ideas is easy in a subreddit; crafting new ones there is mostly beside the point.

The way those in power responded to these dynamics was troubling. Their general strategy, Gurri thought, was to wish that the Internet and its “unruly public” would go away, and that the halcyon days of authoritative hierarchy would return. Leaders lectured Internet users about media literacy and pushed for the tweaking of algorithms. Internet users, for their part, grew increasingly uninterested in taking leaders, institutions, and experts seriously. For more and more people, a random YouTuber seemed preferable to a credentialled expert; anyone representing “the system” was intrinsically untrustworthy. As the powerful and the public came to regard one another with contempt, they created “a perpetual feedback loop of failure and negation,” Gurri wrote. Nihilism—“the belief that the status quo is so abhorrent that destruction will be a form of progress”—became widespread. It could be expressed substantively (say, by rioting in the Capitol) or discursively, by asserting your right to say and believe anything you want, no matter how absurd.

I first read “The Revolt of the Public” in 2016, after Donald Trump won the Presidency, because many bloggers I followed described it as prescient. I disagreed with Gurri, who has a libertarian sensibility, on many points, including the character of the Obama Presidency and the nature of the Occupy movement, and felt that the book downplayed the degree to which the American left has remained largely allied with its institutions while the right has not. But I also found its analysis illuminating, and I’ve thought about the book with drumbeat regularity ever since. Recently, a friend told me about a relative of his who maintained that some public schools had installed “human litter boxes” for the convenience of students who “identify as cats.” “How could he seriously believe something like that?” my friend asked. Remembering Gurri, I wondered if “believing” was the wrong concept to apply to such a case. Saying that you believe in human litter boxes might be better seen as a way of signalling your rejection of discursive authority. It’s like saying, “No one can tell me what to think.”

How can a society function when the rejection of knowledge becomes a political act? Gurri offers a few suggestions, most aimed at healing the breach between institutions and the public: government agencies might use technology to become more transparent, for example, and disillusioned voters might adopt more realistic expectations about how much leaders can improve their lives. Yet the main goal of his book isn’t to fix the problem (it may not be fixable); he just wants to describe it. Short of some new and vast transformation, it’s hard to see the Internet becoming a venue for consensus-building; similarly, it’s difficult to imagine a world in which citizens become reënchanted with the media and return to trusting authority figures. “All things equal, the system will continue to bleed away legitimacy,” he concludes. “The mass extinction of stories of legitimacy leaves no margin for error, no residual store of public good will. Any spark can blow up any political system at any time, anywhere.”

A decade ago, when Gurri published “The Revolt of the Public,” the salient change in the world of information was a startling increase in the number of human voices empowered to speak at once. Yuval Noah Harari, in his new book “Nexus: A Brief History of Information Networks from the Stone Age to AI,” looks ahead to the next few decades, when many of the voices we encounter online may be automated. “What we are talking about is potentially the end of human history,” he writes. “Not the end of history, but the end of its human-dominated part.” A.I. systems could quickly “eat the whole of human culture—everything we have created over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.” He goes on:

We live cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the opinions of friends. Our sexual habits are influenced by what we hear in fairy tales and see in movies. Even the way we walk and breathe is nudged by cultural traditions, such as the military discipline of soldiers and the meditative exercises of monks. Until very recently, the cultural cocoon we lived in was woven by other humans. Going forward, it will be increasingly designed by computers.

For us to grasp where these developments might take us, Harari believes it’s helpful to adopt a novel definition of “information.” We’re used to thinking of information as being representational—that is, a piece of information represents reality, and might be true or false. But another way to look at information is to see it as a “social nexus” capable of putting people into “formation.” From this perspective, it doesn’t matter whether information is true or not. The Bible has shaped the course of history because the stories it contains have persuaded billions of people to coöperate. Bureaucratic records describe only limited aspects of our lives, but they have created relationships between governments and citizens. Taylor Swift’s songs have conjured Swifties out of the mass of humanity. When new information becomes available, new social relationships spring up.

What will happen when A.I. systems begin pulling people into formation? We can get a glimpse of the possible consequences by looking at what’s already happened on the pre-A.I. Internet. Harari cites a 2022 study, conducted by the digital-intelligence firm Similarweb, which showed that between twenty and thirty per cent of the content on Twitter was posted by bots, which in turn constituted only five per cent of that platform’s user base. It’s no stretch to say that a platform like Twitter is itself a kind of bot; its algorithms decide, in an automated fashion, what users should see. On such a platform, therefore, swarms of bots interact with a mega-bot, while human beings read and respond alongside. If this phenomenon were amplified—and if the bots and algorithms were capable of holding intelligent conversations—the likely outcome would be “digital anarchy,” as Harari puts it. Conversations among machines will shape conversations among humans. “The public sphere will be flooded by computer-generated fake news, citizens will not be able to tell whether they are having a debate with a human friend or a manipulative machine, and no consensus will remain about the most basic rules of discussion or the most basic facts.”

To prepare for this possible world, Harari advocates the development of a robust “computer politics,” through which democratic societies might safeguard their public spheres. Among other things, he argues, we should ban the impersonation of people by computers and require A.I. systems to exercise a fiduciary duty toward their users. Regulatory agencies should be charged with evaluating the most important algorithms, and individuals should have a “right to an explanation” when A.I. systems make decisions that shape their lives. And yet he admits that, even if such reforms are put into place, there will be reasons to doubt “the compatibility of democracy with the structure of twenty-first-century information networks.” Democracy on a small scale is easy; it’s no problem for the members of a club or the residents of a small town to elect a new leader or mayor. But democracy on a mass scale depends on mass institutions—mass media, mass education, mass culture—that seem likely to fracture or mutate with the arrival of A.I. The forms of government that flourished in one info-epoch may not thrive in the next.

Call it info-determinism: the belief that the ways information flows through the world form a kind of web in which we’re ensnared. One reason to take this view seriously is that it’s actually pretty old. In 1999, in the novel “All Tomorrow’s Parties,” William Gibson imagined a character reflecting on the fluidity of things in a world of unlimited information:

He had been taught, of course, that history, along with geography, was dead. That history in the older sense was an historical concept. History in the older sense was narrative, stories we told ourselves about where we’d come from and what it had been like, and those narratives were revised by each new generation, and indeed always had been. History was plastic, was a matter of interpretation. The digital had not so much changed that as made it too obvious to ignore.

The key step is the last one. As the density, pace, and fluidity of information have increased, we’ve become more conscious of the role it plays in our lives—and more suspicious of it.

“All Tomorrow’s Parties” was near-future science fiction: the trilogy of novels it concluded begins sometime around 2006. In real life, 2006 was the year that Twitter launched, and in which Facebook opened itself to people who weren’t college students and created its news feed; it was also the year in which Google bought YouTube, and in which Time magazine’s “Person of the Year” was “You”—the online individual, which, massed together, made for “the many wresting power from the few.” “We are so ready for it,” the novelist Lev Grossman wrote, in that issue. “We’re ready to balance our diet of predigested news with raw feeds from Baghdad and Boston and Beijing. You can learn more about how Americans live just by looking at the backgrounds of YouTube videos—those rumpled bedrooms and toy-strewn basement rec rooms—than you could from 1,000 hours of network television.” Back then, info-determinism was exciting. Today, it feels like a challenge we must surmount, or else. ♦
