On Bullshit, And AI-Generated Prose
There’s a word for prose and speech that exist mostly just to fill the space
AI text-generation tools have been getting into a bit of trouble lately.
Recently, OpenAI released “ChatGPT”, a quite remarkable chatbot. It’s built atop GPT-3, OpenAI’s language model that’s very fluent at autocompleting and summarizing text. Denizens of the Internet quickly discovered the fun of getting ChatGPT to do weirdly creative tasks — like rewriting “Baby Got Back” in the style of Chaucer, or creating text games on the fly, or writing a sonnet about string cheese. I myself spent an evening getting ChatGPT to generate radio plays of famous historical figures arguing about what to have for dinner. It was pretty delightful!
But the problems begin when you require ChatGPT to be factually accurate.
When it comes to facts, the AI sometimes flies off the rails spectacularly. When the biology professor Carl T. Bergstrom asked ChatGPT to write a Wikipedia entry about him, it got basic dates of his career wrong, said he’d won awards he hadn’t, and claimed he held a professorship that doesn’t even exist. When Mike Pearl asked it what color the uniforms of Napoleon’s Royal Marines were, it utterly muffed it. (And ChatGPT wasn’t the only AI running afoul of facts. A few weeks ago, Meta…