The “Boring Apocalypse” Of Today’s AI
I recently learned a very useful concept for thinking about our AI future.
It’s from a recent column by the New York Times writer Ezra Klein, in which he grapples with the impact of large language models like ChatGPT.
At one point, he perceptively notes that language AI could be used as a sort of denial-of-service attack on any institution that’s predicated on human feedback.
Consider, Klein wrote, a city that proposes a new building development and asks residents to submit letters opposing or supporting it. A NIMBY local resident could use ChatGPT to crank out “a 1,000-page complaint” in an instant.
That could really swamp a thinly staffed housing department, right? A bad thing.
Except, as Klein adds, the housing department staff might well use ChatGPT themselves — to auto-summarize the incoming letters from residents. Fighting fire with fire!
That sounds … useful? Sort of? Except now you’ve got a situation in which people are using AI to parse text that was itself cranked out by an AI.
It’s an incredibly weird and soul-deadening prospect, perhaps all the more depressing because of how entropically plausible it is. One can easily imagine everyone strapping on their AI mechas so they can attack each other with text, and defend themselves in kind.
What exactly do we call this sort of grey-goo textual sludge that’s likely about to emerge?
“The Boring Apocalypse”.
That’s the brilliant term coined by the Harvard computer scientist Jonathan Frankle, who also works as the chief scientist for MosaicML, a firm that makes AI models.
When Frankle spoke to Klein, here’s how he described it …
Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the “boring apocalypse” scenario for A.I., in which “we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of…