People expect different things of universities: great basketball, cultural centers, qualified teaching of philosophy or physics, even a cure for cancer. It is not surprising that these institutions try to satisfy every demand.
But, as it turns out, no one is happy. The Trump administration has launched what amounts to an open assault on higher education, imposing deep cuts in federal grant funding. University presidents are worried, as are faculty and anyone invested in the broader role of universities.
As a historian of science and technology, I study the evolving role of universities, from their medieval, spiritual origins to their modern role as entrepreneurial engines of R&D. I teach in the humanities, and my courses are anchored in traditional liberal arts programs, in the hope of producing people prepared for the challenges of freedom. But my field is the growth of scientific and technological understanding of the world and our place in it. And if that is what concerns you, the White House's actions look, frankly, insignificant. The real force hurtling across campus is AI, and it is approaching at astonishing speed.
Let me share an observation from the impact zone. When I first asked thirty Princeton undergraduates from twelve majors whether they had used AI, not a single hand went up. The same was true of my graduate students. Even after several enthusiastic nudges ("Hey! I use these tools! They're awesome! Let's talk about it!"), I had no success.
Not because they're dishonest, but because they're paralyzed. As one quiet young woman explained after class, nearly every syllabus now carries a warning: using ChatGPT or similar tools can result in a referral to the dean. No one wants to take the risk. One student mentioned that a major AI site might even be blocked on the university network, though she was too scared to check the claim.
In one campus department, a recently drafted anti-AI policy, read literally, would have effectively barred faculty from assigning any AI-related work to students (it was eventually revised). Last year, when several distinguished alumni and other notables conducted an external review of the history department, their top recommendation was to urgently address the looming problem of AI in our teaching and research. The proposal was received rather coolly. But the idea that we can simply carry on as usual won't work either.
Quite the contrary: stunning transformations are happening at breakneck speed. Yet we find ourselves in a strange in-between on campus, where everyone seems to want to pretend that the most significant revolution in thinking in a century isn't happening. The prevailing approach seems to be, "We'll just tell students they can't use these tools and carry on as before." This is simply insane. And it can't last. It's time to discuss what all this means for university life, and for the humanities in particular.
Let's start with the power of these systems. Two years ago, one of my computer-science students used a beta model to train a chatbot on roughly a hundred thousand words of material from several of my courses. He sent me a link to the interface. The experience of asking it questions about my own subject was uncanny. The answers weren't mine, but they were good enough to get my attention.
Before heading off to a fintech startup, this student urged me to sign up for OpenAI's two-hundred-dollar-a-month premium tier. The service, which the company was operating at a loss at the time, offers a level of analysis, insight, and creative thinking that makes the tipping point unmistakable.
An example: I recently attended a scholarly lecture on a rare illuminated manuscript. The speaker was as brilliant as they come, but his talk was hard to follow. Frustrated, I opened ChatGPT and began asking it questions about the topic. Over the course of that lecture, I had a rich exchange with the system. I learned what was and wasn't known about the document, who had done the underlying research, and how scholars had interpreted its iconography and transmission. Was the information perfect? Of course not, but then what we get from humans isn't perfect either. Was it better than what I was hearing? Absolutely.
Increasingly, machines outperform us in almost every subject. Yes, you will hear true scholars explain that DeepSeek can't reliably distinguish Karakalpak from the neighboring Kipchak-Nogai dialects (or whatever). To me, that is like pointing out the daisies along the tracks while a locomotive is screaming up behind you. I am a reader and writer of books, trained in an almost monastic devotion to canonical scholarship in history, philosophy, art, and literature. I have been doing this work for more than thirty years. And already the thousands of academic books lining my offices are beginning to feel like archaeological artifacts. Why turn to them to answer a question? They are so strangely inefficient, so bizarre in
Source: newyorker.com