My relatives own a compact two-room beach bungalow. It’s part of a condominium that hasn’t changed much in fifty years. The units are connected by brick walkways that wind through palm trees and teak awnings to the beach. Large hotels and condos were built nearby, and it always seemed inevitable that the bungalow would be torn down. But it never happened, probably because the association’s rules require eighty percent of the owners to agree to sell the property. Eighty percent of the people rarely agree to anything.
Recently, though, the developer made some progress. He offered to buy a few units at seemingly high prices; after some owners expressed interest, he offered a higher-than-expected amount for the entire complex. Enough people became open to the idea of a mass sale that it suddenly seemed feasible. Was it a good deal? How would negotiations proceed? The owners, unsure, began arguing among themselves.
As a favor to my mother-in-law, I shared the whole situation with OpenAI’s ChatGPT 4.5, a version of the company’s AI model, available in the Plus and Pro tiers, that significantly outperforms the cheaper and free versions on some tasks. The Pro tier, which costs $200 a month, includes a feature called “deep research,” which lets the AI spend extended periods of time—sometimes as long as half an hour—scouring online data and analyzing it. I asked the AI to evaluate the proposal; within three minutes, it delivered a detailed report. Then, over the course of several hours, I repeatedly asked it to revise the report to address my follow-up questions.
The offer was too low, the AI said. Its research turned up nearby properties that had sold for higher prices. In one case, a property had been “improved” by its new owners after the sale, increasing its occupancy; that meant it had been worth more than the transaction price suggested. Negotiations, in turn, would be tricky. I asked the AI to consider a scenario in which the developers purchased more than half the units, giving them control of the condo board. It predicted that they could impose onerous new rules or fees to pressure more of the original owners into selling. And yet, the AI noted, this could also be a point of vulnerability for the developers. “They’d own half of a condo complex that they couldn’t redevelop, which means their investment would be in limbo,” it said. “The bank financing their buyout would be nervous.” If at least twenty-one percent of the owners held out, they could force the developers to bleed money and raise their offer.
I was impressed, and forwarded the report to my mother-in-law. A real-estate lawyer could provide a better analysis, I thought, but not in three minutes and for two hundred dollars. (The AI’s analysis had a few errors—it initially overestimated the size of the property, for example—but it quickly and accurately corrected them when I pointed them out.) Around the same time, I also asked ChatGPT to explain a scientific field I was planning to write about; to help me set up an old computer for my six-year-old so that he could program his own robot; and, as an experiment, to write a piece of fan fiction based on a profile I had written of Geoffrey Hinton, the “godfather of AI” (“Reporter Josh had left earlier that day, waving from the departing boat…”). But the apartment advice was different. The AI had helped me with a real, pressing financial problem, not a hypothetical one. It might even have been worth the expense. It had demonstrated a certain practicality—a level of street savvy—that I had, perhaps naively, associated with firsthand human experience. I’d been following AI closely for years; I knew these systems could do much more than real-estate research. Still, this was both an “aha!” moment and an “oh” moment. It’s here, I thought. It’s really happening.
Many people are unsure how seriously to take AI. The question can be difficult to gauge, both because the technology is new and because of the hype surrounding it. It’s reasonable to be skeptical of promotional claims, since the future is unpredictable. But the backlash that arises as a kind of immune response to the hype doesn’t necessarily clarify the situation, either. In 1879, the Times published a skeptical article about the light bulb, titled “Edison’s Electric Light—Conflicting Statements as to Its Utility.” In a section offering “the scientific view,” the paper quoted a distinguished engineer, the president of the Stevens Institute of Technology, who “protested against the celebration of the results of Edison’s experiments in electric lighting as ‘a remarkable success.’” He wasn’t being unreasonable: inventors had been failing to produce a workable light bulb for decades. In many other cases, skepticism like his had been justified.
The hype around AI has given rise to two kinds of counter-hype. The first holds that the technology will soon plateau: AI may continue to excel at intuitive tasks while struggling with planning and logical reasoning. According to this view, further breakthroughs will be needed before we reach what is described as “artificial general intelligence,” or AGI—roughly human-level intelligence.
Source: newyorker.com