“You Know Nothing Of My Work, ChatGPT”

I asked it about my journalism, and it got nearly every fact wrong

Clive Thompson

--

[Image: a red road sign reading “WRONG WAY.” Photo by Kenny Eliason on Unsplash]

It’s now pretty well-known that large language models have trouble getting facts right.

The folks who design these models like to say that their AI has a tendency to “hallucinate”, though personally I prefer to say it has a tendency to “bullshit”. Tools like ChatGPT breezily mix verifiable facts with utterly made-up crap, then deliver it all with glad-handing Silicon Valley overconfidence.

This is why, as I’ve written, AI tools are absolutely wonderful if you need to generate text by the shovelful and are unencumbered by the need to be factually correct. Or as I put it last December …

It is probably no accident that the industries that have most enthusiastically adopted “AI-generated content” are the ones where bullshit — human-authored bullshit — is historically common: content marketing, PR, certain tech firms, and the more brackish, clickbaity tide-pools of blogging and journalism.

But hey! Technology marches ever forward, so who knows? Maybe, in the intervening months since I wrote that dismal little appraisal, ChatGPT has gotten more accurate. OpenAI has been issuing updates to the model, right?
