The Age of Chat


Earlier this spring, I took the bus to the Moscone Center, in downtown San Francisco, where almost thirty thousand people had gathered for the annual Game Developers Conference (G.D.C.), which I was attending as a journalist. I had spent the previous few months out on maternity leave, and I was glad to return to work, to have meetings, to temporarily exit the domestic sphere. Participating in public life felt incredible, almost psychedelic. I loved making small talk with the bus driver, and eavesdropping on strangers. “Conferences are back,” I heard one man say, sombrely, to another. As my bus pulled away, I saw that it was stamped, marvellously, with an advertisement for Taiwanese grouper. “Mild yet distinctive flavor,” the ad read. “Try lean and nutritious grouper tonight.”

On the G.D.C. expo floor, skateboarders wearing black bodysuits studded with sensors performed low-level tricks on a quarter-pipe. People roped with conference lanyards stumbled about in V.R. goggles. Every wall or partition seemed to be covered in high-performance, high-resolution screens and monitors; it was grounding to look at my phone. That week, members of the media and prolific tech-industry Twitter users couldn’t stop talking to, and about, chatbots. OpenAI had just released a new version of ChatGPT, and the technology was freaking everyone out. There was speculation about whether artificial intelligence would erode the social fabric, and whether entire professions, even industries, were about to be replaced.

The gaming industry has long relied on various forms of what might be called A.I., and at G.D.C. at least two of the talks addressed the use of large language models and generative A.I. in game development. They were focussed specifically on the construction of non-player characters, or N.P.C.s: simple, purely instrumental dramatis personae—villagers, forest creatures, combat enemies, and the like—who offer scripted, constrained, utilitarian lines of dialogue, known as barks. (“Hi there”; “Have you heard?”; “Look out!”) N.P.C.s are meant to make a game feel populated, busy, and multidimensional, without being too distracting. Meanwhile, outside of gaming, the N.P.C. has become a sort of meme. In 2018, the Times identified the term “N.P.C.” as “the Pro-Trump Internet’s new favorite insult,” after right-wing Internet users began using it to describe their political opponents as mechanical chumps brainwashed by political orthodoxy. These days, “N.P.C.” can be used to describe another person, or oneself, as generic—basic, reproducible, ancillary, pre-written, powerless.

When I considered the possibilities of N.P.C.s powered by L.L.M.s, I imagined self-replenishing worlds of talk, with every object and character endowed with personality. I pictured solicitous orcs, garrulous venders, bantering flora. In reality, the vision was more workaday. N.P.C.s “may be talking to the player, to each other, or to themselves—but they must speak without hesitation, repetition, or deviation from the narrative design,” the summary for one of the talks, given by Ben Swanson, a researcher at Ubisoft, read. Swanson’s presentation promoted Ubisoft’s Ghostwriter program, an application designed to reduce the busywork involved in creating branching “trees” of dialogue, in which the same types of conversations occur many times, slightly permuted depending on what players say. Ghostwriter uses a large language model to suggest potential barks for N.P.C.s, based on various criteria inputted by humans, who then vet the model’s output. The player doesn’t directly engage the A.I.
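(A rough sketch of that workflow, for the curious: the Python below illustrates the pipeline the talk described, in which a writer supplies criteria, a model drafts candidate barks, and a human vets each line before it enters a dialogue tree. The class names, fields, and model interface are invented for illustration; they are not Ubisoft’s actual tooling.)

    from dataclasses import dataclass, field

    @dataclass
    class BarkRequest:
        # Criteria a writer hands to the model; these fields are invented for illustration.
        speaker: str        # e.g., "market vender"
        trigger: str        # e.g., "player lingers at the stall"
        tone: str           # e.g., "cheerful, mercantile"
        max_words: int = 8

    @dataclass
    class DialogueNode:
        # One branch of a dialogue tree: a line, plus the permutations that may follow it.
        line: str
        children: list["DialogueNode"] = field(default_factory=list)

    def suggest_barks(request: BarkRequest, model) -> list[str]:
        # The model drafts candidates; a human still decides what ships.
        prompt = (
            f"Write barks of {request.max_words} words or fewer for a {request.speaker} "
            f"when {request.trigger}. Tone: {request.tone}."
        )
        return model.generate(prompt, n=5)  # hypothetical model interface

    def vet(candidates: list[str], approve) -> list[DialogueNode]:
        # Only human-approved lines enter the tree; the player never touches the model.
        return [DialogueNode(line=c) for c in candidates if approve(c)]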

Yet it seems likely that “conversations” with A.I. will become more common in the coming years. Large language models are being added to search engines and e-commerce sites, and to word processors and spreadsheet software. In theory, the capabilities of voice assistants, such as Apple’s Siri or Amazon’s Alexa, could become more complex and layered. Call centers are introducing A.I. “assistants”; Amazon is allegedly building a chatbot; and Wendy’s, the fast-food chain, has been promoting a drive-through A.I. Earlier this spring, the Khan Lab School, a small private school in Palo Alto, introduced an experimental tutoring bot to its students, and the National Eating Disorders Association tried replacing its human-run telephone helpline with a bot dubbed Tessa. (NEDA suspended the service after Tessa gave harmful advice to callers.)

Chatbots are not the end point of L.L.M.s. Arguably, the technology’s most impressive capabilities have less to do with conversation and more to do with processing data—unearthing and imitating patterns, regurgitating inputs to produce something close to summaries—of which conversational text is just one type. Still, it seems that the future could hold endless opportunities for small talk with indefatigable interlocutors. Some of this talk will be written, and some may be conducted through verbal interfaces that respond to spoken commands, queries, and other inputs. Anything hooked up to a database could effectively become a bot, or an N.P.C., whether in virtual or physical space. We could be entering an age characterized by endless chat—useful or skimmable, and bottomless by design.

Online chat is distinct from conversation or talk, and tends to have its own flow. It can be cacophonous—participants spraying one another with bursts of short text—or totally asynchronous, with souls hitting Return into the void. The stakes are usually low, and opportunities to partake are ubiquitous: one can chat in direct messages, on dating apps, in video games, on video calls, in word-processing software, and so on, to say nothing of dedicated applications such as WhatsApp or iMessage. Chat tends to adapt to the features, constraints, and conventions of a given platform. Chatting on Slack is different from chatting on Tinder. There is always more to say, and somewhere else to say it. Silence—stillness—can feel like a miracle.

In 2011, the editors of the journal n+1 offered a baroque, funny treatise on chat in an essay called “Chathexis.” In particular, they explored the unique pleasures of Gchat, that era’s discursive software of choice. “Gchat returns philosophy to the bedroom as, late at night, we find ourselves in a state of rapturous focus,” they wrote. “So many of us feel our best selves in Gchat. Silent, we are unable to talk over our friends, and so we become better and deeper listeners, as well as better speakers—or writers.” Chat, they noted, can be cozy, intimate, casual, revelatory, expansive; it also has an emotional undercurrent. “Chat’s immediacy emphasizes response, reminding us that we do not simply create and express ourselves in writing, but create and express our relationships,” the editors argued.

What chatbots offer isn’t chat, exactly—it’s a chat simulation. Some of the chummier models, such as Replika (“the AI companion who cares”) or Bing’s semiretired Sydney, can produce a rhythm that’s close to the real thing. ChatGPT encourages anthropomorphism, even with its dry, intentionally mechanical “personality”: it issues apologies when it makes a mistake, returns text in the first person, and has a typing-awareness indicator—a line of dots, climbing from one to three and back again, that suggests thoughtful, halting composition on the part of the computer. (Kevin Munger, a political scientist at Penn State University, has proposed regulating the use of first-person pronouns, to “limit the risk of people being scammed by LLMs, either financially or emotionally.”) But ChatGPT’s prose style remains almost uniformly stiff and elliptical; although it can be prompted to rephrase its utterances in different emotional registers and affects, using it still feels like conversing with equations. Compared with the immediacy of my own Gchats with friends—“omg american apparels website is out of control right now,” one friend wrote, in 2011—ChatGPT offers data-processing masquerading as conversation, a server farm humming at the frequency of speech.

By making L.L.M.s conversational, their developers enlist human interlocutors in training and refining the software. But adding conversational interfaces to consumer-facing L.L.M.s is also an appeal to familiarity, to existing habits and activities. “Chat” evokes what search engines and databases cannot: a sense of personal involvement. It implicates one’s selfhood, which helps cultivate certain behaviors. Earlier this year, in a Medium post titled “Who are we talking to when we talk to these bots?,” Colin Fraser, a data scientist at Meta, wrote that ChatGPT’s “chat-shaped interface . . . guides the user towards producing inputs of the desired type.” When users stray from their intended role, he went on, the L.L.M. is liable to deliver undesirable output—sentences, or sentence fragments, that betray the “mindless synthetic text generator” operating beneath the surface. “A big reason that OpenAI needs you to keep your inputs within the bounds of a typical conversational style is that it enables them to more effectively police the output of the model,” Fraser went on. “The model only acts remotely predictably when the user acts predictably.”

With today’s chatbots, human users are not really speaking; they are prompting. Prompting, in this context, is the term used to describe deliberately prodding or nudging the software toward specific outcomes. OpenAI’s documentation has a page on “best practices for prompt engineering” with the company’s A.P.I.; these include steering away from negation (“Instead of just saying what not to do, say what to do instead”), and offering details about the “context, outcome, length, format, style,” and tone of the desired response. Some companies working on A.I. products have hired “prompt engineers”—people who develop and document fruitful prompts, or sequences of prompts. A job listing for a “prompt engineer/librarian,” posted by the company Anthropic, whose chatbot, Claude, is marketed as “helpful, honest, and harmless,” describes the role as “a hybrid between programming, instructing, and teaching.”
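(In practice, following those guidelines might look something like the sketch below, which uses OpenAI’s Python library as it existed at the time; the prompt and parameters are my illustration, not OpenAI’s own examples.)

    import openai  # assumes the pre-1.0 "openai" library, with an API key in the environment

    # Per the best-practices page: say what to do, not what to avoid, and spell out
    # context, outcome, length, format, style, and tone.
    prompt = (
        "You are writing for a general-interest magazine. "    # context
        "Summarize how N.P.C. dialogue works in video games "  # outcome
        "in three sentences, "                                 # length
        "as a single paragraph, "                              # format
        "in plain, conversational prose."                      # style and tone
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)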

If certain prompts produce higher-quality data—content that is more legible, thorough, and sometimes more accurate—then prompt design becomes its own form of literacy. Earlier this spring, the Times published an instructional article, “Get the Best from ChatGPT with These Golden Prompts,” that advised using the phrase “act as if” to guide chatbots to “emulate an expert.” Users, of course, are also acting “as if.” They, too, must engage in acts of emulation—playing along, chatting in ways that are computationally friendly, suspending any disbelief about the expertise of predictive text. High-quality inputs are rewarded with high-quality outputs; the software is a kind of mirror. What’s happening is data exchange between user and bot—but it is also a mutual manipulation, a flywheel, an ouroboros.
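(The “act as if” technique amounts to little more than a framing clause prepended to the request; a hypothetical variation on the call above, with a persona and question of my own invention:)

    import openai

    # "Act as if" framing: the model is asked to emulate an expert persona.
    golden_prompt = (
        "Act as if you are a veteran restaurant critic. "
        "Review a fast-casual lunch bowl in two sentences, in a warm, knowing tone."
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": golden_prompt}],
    )
    print(response.choices[0].message.content)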

At G.D.C., during a break between meetings, I went around the corner to the Metreon, a mall, to pick up lunch at a fast-casual Vietnamese restaurant. I had frequented the restaurant about a decade ago, when I was twenty-five and working in customer support at a startup nearby. Back then, I spent my days writing e-mails that said things like “I’d love to reproduce this error for myself” and “Let me know if that helps!” In those years, Tony Hsieh’s book “Delivering Happiness” was popular, and there was a lot of talk about how to administer “surprise and delight.” It was strange to think about a future in which this work might be completely automated—in which L.L.M.s, rather than liberal-arts graduates, would be tasked with transmogrifying frustration and human error into something useful and charming.

The restaurant looked about the same as I remembered, but with more screens. Bowls are an intrinsic part of cuisines all over the world, but in corporate lunch culture a “bowl” also signifies drop-down-menu food—food reduced to first principles—and people wearing G.D.C. lanyards stood behind two tablets, waiting to select bases, proteins, sauces, and toppings. On the other side of the tablets, a line of employees transferred fistfuls of rice noodles from cold metal troughs to compostable containers. When it was my turn, I quickly selected the allotted number of components, almost at random, and immediately felt remorse. As the workers assembled my bowl, I read Yelp reviews of the restaurant. “For outright value, you just can’t beat the bowl,” one reviewer had written. “They really stuffed the bowl,” wrote another. “My bowl was brimming to the top!” a third claimed.

I have always been fond of Yelp, not as a service but as a literary corpus, documenting nearly twenty years of sociocultural desire and thwarted expectations—a chronicle of a generation’s pursuit of optimized experience. The site, which was founded in 2004, is a perfect artifact of Web 2.0, a version of the Web that produced new styles of user-generated content: tweets, shitposts, comments, memes, reviews—forms of public writing with their own conventions, shorthand, and lexical tics, each unique to its platform. (Yelp: “I would give zero stars if I could”; “I am very surprised by the rating”; “Really a 3.5, but rounded down because of presentation.”) It was almost poignant that many large language models were trained on content from Web 2.0. Those corporate platforms, and the text that animates them, seemed quaint and homespun by comparison.

I wondered whether chatbots and other natural-language interfaces would produce new shapes of conversation, new forms of talk, new types of content. Just as Yelp affected the way its users thought about certain offline experiences—a trip to the chiropractor, the customer service at a gas station, the size of a lunch bowl—L.L.M.s had the potential to affect the way people sought and processed information. Already, despite generating text riddled with factual errors, chatbots were being positioned as information-gathering tools. How might they affect the expectations users have about knowledge, or their attitudes toward expertise and authority?

Carrying my bowl, I found a seat in a plaza adjacent to the Moscone Center. People leaned against the building with cigarettes, or gathered in small groups, vaping. The air was fragrant, polluted, and evocative, and my face grew warm in the sun. I poked at a matchstick of jicama, nostalgic and happy, consuming the lunch option of my youth. I thought back to working in customer support: how I would occasionally repair to the server room to take video calls with customers; how they sometimes seemed surprised to meet the person on the receiving end of their e-mails; how the tone would shift. Transactional conversation—speaking, and being spoken to, like a bot—can be efficient, maybe even nice, depending on the context and on your disposition. But it can also feel condescending, flattening, manipulative, and generic—like being treated as an N.P.C. I ate my Proustian lunch bowl. The Yelp reviewers were right: the bowl was large.

Back inside the Moscone Center, after another saunter across the expo floor, I took an escalator up to the lobby, turned several corners, walked down a flight of stairs, and found two “lactation pods” in what appeared to be the basement. The pods were freestanding structures, manufactured by a company called Mamava, and looked like teardrop trailers. “Hello, Mamas!” a welcome note, printed on the back of the door, read. “Relax. You deserve it.” A small plastic plant sat on a ledge, next to a mirror, and I checked my reflection: turtleneck, backpack, conference lanyard, tired eyes. “Looking good mama!” a decal running along the bottom of the mirror chirped. It was all too cozy, a little debased. Talk was feeling very cheap.

Language can contain an entire world, revealing its speakers’ history, values, or pathologies. It can also be obfuscating, diversionary, slippery. Chattiness, with its personality-driven appeals to familiarity, can conceal or elide false promises, banality, emptiness, controversy, and the context of its own existence. (In 2022, Taiwanese grouper was banned by China, its primary market, leaving its producers in dire need of new consumers.) In this vein, simulated chat obscures the reality of what it takes to create, train, update, and maintain large language models, which are, at least for now, hugely expensive and resource-intensive. It is a tremendous undertaking to make computing more personal and intimate: behind every chatbot is a server farm, or several. Prompting a large language model to call up and arrange data involves activating a vast network. Chatbots, for all their ostensible personalization, are in the business of mass production.

All of this infrastructure buttresses a fantasy. Technologists have long dreamed of having interpersonal relationships with programs. Recently, Sam Altman, the C.E.O. of OpenAI, reminisced to the Wall Street Journal about being a child, peering into his Macintosh, and having the “sudden realization” that “someday, the computer was going to learn to think.” (The Journal’s use of the word “realization” suggests fact, rather than conjecture; it’s not yet clear whether L.L.M.s, or subsequent technologies, will be able to “think” in any recognizable or meaningful way.) Last week, the venture capitalist Marc Andreessen published an essay in which he envisioned a world of empathetic, well-informed, motivational bots, “maximizing every person’s outcomes” and working alongside artists, scientists, heads of state, and children. “Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful,” Andreessen wrote. “The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.”

When I sat alone, at home, looking into the expectant portal of ChatGPT, I wondered how far even the most sophisticated synthetic fondness, or “intelligence,” could take a technology like this—whether it could ever feel transportive or trustworthy. Maybe the future did hold infinite knowledge and infinite love, or at least the “machine versions” of those things, manufactured, marketed, and sold by corporate monopolies and venture-funded tech companies. Meanwhile, around the edges of my screen, notifications flickered and slid; my phone buzzed as my in-box expanded with headlines, work logistics, personal news, banter, commiseration, gossip. In the near term, the future seemed to hold infinite chat, not between friends, or even strangers, but with server racks in Altoona and Ashburn—a world of kaleidoscopic interfaces waiting to be prompted, ready to say just what users wanted to hear. ♦
