Contrary to what many of my friends believe, good scientists are always busy—at least in the sense that when we’re faced with a problem, as we often are, it’s hard to get distracted. A good problem is a mind worm: it stays with you until it’s solved or replaced by something else. My Dartmouth colleague Luke Chang, a neuroscientist who studies what goes on in people’s minds when they communicate, knows the feeling all too well. One day, on a long drive back to Hanover, he found himself occupied with one. The drive up I-89 is usually uneventful—a straight northbound shot, perfect for letting the mind wander. But Luke’s mind was hooked on a technical problem: how to turn a decent model of facial expression into something truly compelling. The goal was to encode the nuances of how human faces convey emotion, then visualize them. Smiles and frowns are just the beginning; the spectrum of human emotion and intent plays out across a repertoire of facial expressions that serve as a basic alphabet for communication. Luke had tried to build the “action units” of the face into his program, but the visualization was proving difficult: instead of realistic faces, his code kept producing cartoonish images. All his recent attempts had failed, and it was driving him crazy.
Years ago, Luke would probably have struggled with the problem alone for the entire drive. This time, he decided to talk it over with his new partner: ChatGPT. They talked for an hour. Luke took apart his model and described what was going wrong. He asked questions and thought through solutions. ChatGPT, as always, was optimistic, tireless, and, most important, unfazed by failure. It proposed options. It asked questions of its own. Some directions were promising; others were dead ends. We sometimes forget that such a machine is not so much an oracle as a conversationalist. The exchange was not rushed; it was deliberate: man and machine wading through the fog together. Eventually, ChatGPT suggested that Luke look into a technique known as “disentanglement,” a way of simplifying mathematical models that have become unwieldy. The term struck a chord with Luke. “And then it started explaining everything to me,” he recalled. “I thought, ‘Oh, this is really interesting.’ And then I said, ‘Okay, tell me more about it—conceptually, and, actually, how would I implement this disentangling? Can you just write some code?’”
It could. And it did. When Luke returned to the office, the code was waiting for him in the chat. He copied it into his Python script, hit run, and headed off to a lunch meeting. “It was so much fun learning a new concept, implementing it, and iterating on it,” he told me. “I didn’t want to wait. I just wanted to talk about it.” And did it work? Yes, it did. “It was such a wonderful feeling for me,” he said. “I feel like I’m doing more in less time, accelerating my learning and my creativity, and enjoying my work in a way I haven’t in a long time.” That’s what a good collaborator can do—even if, these days, that collaborator is a machine.
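For the curious: “disentanglement” usually means training a model so that each dimension of its internal representation captures one independent factor of variation. Below is a minimal, purely illustrative sketch of one common approach, a beta-variational autoencoder, written in Python with PyTorch. The architecture, the dimensions, and the idea of feeding it facial action-unit intensities are my own assumptions for illustration, not Chang’s actual code.

```python
# Illustrative only: a tiny beta-VAE, one common way to "disentangle" a model's
# latent factors. The sizes and the use of action-unit intensities as input are
# assumptions made for this sketch, not details from Chang's project.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, n_action_units=20, n_latent=6, beta=4.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(
            nn.Linear(n_action_units, 64), nn.ReLU(),
            nn.Linear(64, 2 * n_latent),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_action_units),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z)
        recon_loss = F.mse_loss(recon, x)
        # Up-weighting the KL term (beta > 1) pushes each latent dimension toward
        # an independent factor of variation -- the "disentangling" step.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, recon_loss + self.beta * kl

# Toy training loop on random stand-in data for action-unit intensities.
model = BetaVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(128, 20)
for _ in range(200):
    optimizer.zero_grad()
    _, loss = model(x)
    loss.backward()
    optimizer.step()
```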
Much has been said about the disruptive impact of generative AI on academic life. As a professor of mathematics and computer science at Dartmouth, I hear these concerns firsthand. But this is just the latest chapter in a long and troubled history of inventions designed to aid thinking. Such tools are rarely welcomed with open arms. “Your invention will enable them to hear many things without proper instruction, and they will imagine that they know many things when they know nothing. And they will be difficult to get along with, since they will merely appear wise without being so.” That’s from Plato’s Phaedrus, where Socrates sympathetically relays the case against the insidious technology of writing. It could have been written yesterday, as a warning against generative AI, by any of my colleagues.
The academy moves slowly, perhaps because the basic equipment of its workers, the brain, has changed little since we first began teaching. Our job is to interpret these vague concepts called “ideas,” in the hope of achieving a clearer understanding of something, anything. Sometimes this knowledge goes out into the world and ruins everything. Most of the time, though, the attitude that “if it ain’t broke, don’t fix it” prevails. Socrates’s anxiety reflects a deep-seated mistrust of new ways of knowing. He was far from the last scholar to believe that his generation’s method was the right one. For him, real thinking took place only through living dialogue; memory and conversation were everything. Writing, he believed, would undermine this: it would “cause forgetfulness,” and, worse, it would rip the words from the speaker, preventing real understanding. Later, the Church expressed similar concerns about the printing press. In both cases, one wonders whether the skepticism wasn’t fueled by underlying fears about job security.
Source: newyorker.com