One Weird Trick To Make Humans Think An AI Is “Sentient”

Vulnerability.

Clive Thompson
7 min read · Jun 13, 2022


“El Buebito” by Gamaliel Espinoza Macedo

By now you may have read the viral Washington Post story about “The Google engineer who thinks the company’s AI has come to life”.

If you haven’t, go read it! It’s quite fascinating. The tl;dr is that Google engineer Blake Lemoine became convinced that LaMDA — one of Google’s massive language models designed for conversation — possessed consciousness. He wound up so worried that Google was unfairly treating this conscious AI that he took the issue to his superiors. When they were unconvinced, he posted the message “LaMDA is sentient” to an internal machine-learning mailing list, contacted a member of the House Judiciary Committee, and went public with his claims. He’s currently on paid administrative leave at Google.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told Nitasha Tiku of the Washington Post. (You can read a compilation Lemoine made of his conversations here.)

Before we go any further, let me say that Lemoine is almost certainly wrong. Today's huge language models are not, I think, anywhere near sentient. They're exceptionally good at mimicking conversation! But they do this purely with pattern-matching and…
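To make the "pattern-matching" point concrete: at bottom, a language model assigns probabilities to possible next words given the words so far, and generates text by sampling from those probabilities. Here is a minimal sketch in Python, a toy bigram model that is vastly simpler than LaMDA (which uses a neural network over long contexts), but the same statistical idea; the corpus and names are purely illustrative:

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a corpus,
# then generate text by sampling from those counts. Real systems like
# LaMDA are enormously more sophisticated, but the core task is the
# same: predict the next token from statistical patterns in text.

corpus = (
    "i think language models are good at mimicking conversation "
    "because they predict the next word from patterns in text"
).split()

# next_words[w] = list of words observed to follow w in the corpus
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 10) -> str:
    """Sample a word sequence by repeatedly picking an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:  # dead end: no observed continuation
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("language"))
# e.g. "language models are good at mimicking conversation because they ..."
```

The output can look fluent, yet nothing in the program "knows" anything: it is statistics over text, which is the heart of the skeptical case against sentience claims.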
