Warning Labels For AI-Generated Text

Maybe we should just start sticking icons on this stuff

Clive Thompson
5 min read · Oct 31


Feel free to copy and use this logo I made! Just link back to this Medium post as attribution — I’m issuing this logo under the CC BY-SA 4.0 license

So, there’s a ton of AI-generated text in our lives.

Back in January, a Fishbowl survey of 4,500 professionals found that 30% were using ChatGPT at work; ten months on, I’d imagine that number could be higher. Perhaps a fifth to a third of college students are using it, depending on whether these surveys hold water. Newsguard is finding more and more content-mill websites filled with AI-generated text (i.e. “Can lemon cure skin allergy?”); one site they tracked pushed out 1,200 articles a day. I’ve seen quite a few Medium comments on my posts that carry the dusty existential odor of a large language model. (Typically, they’ll just blandly summarize what my post says, in a short paragraph, and add no more; their author has a Medium account filled with equally bland posts covering a seemingly random selection of subjects.)

That’s only the visible part of the AI iceberg, mind you! Beneath the surface, plenty of folks are using ChatGPT or other large language models as writing aids: to generate a first draft of an email, a presentation, or a post, which they’ll use as inspiration or a template for their own writing.

So this is the world we’re in now!

Me, I’m on the record as having several reservations, or at least observations, about this.

The big one for me is the problem of bullshit. Large language models autocomplete text based on the most likely phrases and words, given the context. But they don’t seem to grasp semantics, i.e. what words really mean. So they wind up blending bits of complete nonsense alongside correct, factual stuff. This is, as I’ve written, Harry Frankfurt’s definition of “bullshit”: prose written purely to sound breezily confident and keep the dialogue going, with no concern for whether it’s factually tethered to reality.

(This bullshit problem is why I’ve never used LLM AI to, say, summarize documents for me. I initially thought this would be useful! But I need summaries to be reliably factual, and when I’ve tried to use AI for this task, they just aren’t. I’ve had much more luck using AI in coding: say, getting ChatGPT (or Copilot) to identify complex CSS selectors in a given chunk of HTML, or…
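To give a concrete sense of the kind of coding task described above, here is a minimal, stdlib-only sketch of the sort of thing you might ask an AI to write: pulling out the links inside every `<div class="promo">`, which in CSS-selector terms would be `div.promo a`. (The HTML snippet, the `promo` class name, and the `PromoLinkFinder` class are all invented for illustration; in practice a library like BeautifulSoup would let you run the selector directly.)

```python
from html.parser import HTMLParser

class PromoLinkFinder(HTMLParser):
    """Collect href values of <a> tags nested inside <div class="promo">."""

    def __init__(self):
        super().__init__()
        self.depth_in_promo = 0  # >0 while we are inside a promo <div>
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self.depth_in_promo:
            # Track nested <div>s so we know when the promo block ends.
            if tag == "div":
                self.depth_in_promo += 1
            elif tag == "a":
                self.links.append(attrs.get("href"))
        elif tag == "div" and "promo" in attrs.get("class", "").split():
            self.depth_in_promo = 1

    def handle_endtag(self, tag):
        if tag == "div" and self.depth_in_promo:
            self.depth_in_promo -= 1

html = """
<div class="promo"><p><a href="/deal">Deal</a></p></div>
<div class="other"><a href="/skip">Skip</a></div>
"""
finder = PromoLinkFinder()
finder.feed(html)
print(finder.links)  # -> ['/deal']
```

The point is less the code itself than the shape of the task: mechanical, well-specified, and easy to verify by eye — exactly where an LLM’s fluency is useful and its bullshit is cheap to catch.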

I write 2X a week on tech, science, culture — and how those collide. Writer at NYT mag/Wired; author, “Coders”. @clive@saturation.social clive@clivethompson.net