AI-Generated Bullshit Is A Challenge To Our “Vigilance”

What ChatGPT has in common with magazine copy-editing

Clive Thompson
Feb 13, 2023
[Image: a “poop” emoji plushie propped on the pillows of a bed, sunlight pouring in through the window behind it]
Photo by Portuguese Gravity on Unsplash

I once knew a copy-editor who read all her stories backwards.

She said it forced her to slow down, so she’d catch more mistakes.

Cool technique, eh?

Let’s put a pin in that for now.

I’ll come back to it, though, because it relates to something that’s been happening quite a bit lately:

Humans are being duped by AI-generated bullshit.

Last week, Google showed off “Bard,” an “experimental conversational AI service.” The chatbot is powered by LaMDA, a large language model, and Bard is essentially Google’s attempt to catch up to OpenAI and ChatGPT. Google is clearly panicked that conversational AI will become a new interface for everything — and could dethrone its search engine.

To show off Bard and demonstrate how smoothly it works, Google executives posed it this question: “What new discoveries from the James Webb Telescope can I tell my nine-year-old about?”

Google proudly posted the AI’s reply on Twitter …

Screenshot of a tweet by Google, showing Bard’s reply: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about? Your 9-year old might like these recent discoveries made by the James Webb Space Telescope (JWST): • JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called ‘exoplanets.’ Exo means ‘from outside.’”

The problem? That third fact — “JWST took the very first pictures of a planet outside of our own solar system” — is flat-out wrong.

The astrophysicist Grant Tremblay quickly pointed this out …

A tweet by Grant Tremblay reading: “Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system’. The first image was instead done by Chauvin et al. (2004) with the VLT/NACO using adaptive optics.”

A few weeks earlier, CNET fell into its own AI swamp. It turned out that CNET had published 77 stories written by an “internally designed” AI tool — fully 41 of which contained errors, including in pieces purporting to offer personal-finance advice. Derp.

Why didn’t anyone notice these errors? We’re talking about CNET — a publication with actual, human editors — and Google, a multibillion-dollar firm that was proudly showing off its new AI.
