One Weird Trick To Make Humans Think An AI Is “Sentient”
Vulnerability.
--
By now you may have read the viral Washington Post story about “The Google engineer who thinks the company’s AI has come to life”.
If you haven’t, go read it! It’s quite fascinating. The tl;dr is that Google engineer Blake Lemoine became convinced that LaMDA — one of Google’s massive language models designed for conversation — possessed consciousness. He wound up so worried that Google was unfairly treating this conscious AI that he took the issue to his superiors. When they were unconvinced, he posted the message “LaMDA is sentient” to an internal machine-learning mailing list, contacted a member of the House Judiciary Committee, and went public with his claims. He’s currently on paid administrative leave at Google.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told Nitasha Tiku of the Washington Post. (You can read a compilation Lemoine made of his conversations here.)
Before we go any further, let me say that Lemoine is almost certainly wrong. Today's huge language models are not, I think, anywhere near sentient. They're exceptionally good at mimicking conversation! But they do this purely with pattern matching and sequence prediction. (When tested for reasoning, they break pretty quickly.) No one is sure what consciousness truly is — scientists and philosophers still argue over this — and it's by no means clear that pattern matching alone could create it. Frankly, we still don't know what produces consciousness in humans. Could we one day create truly conscious AI? Possibly: I never say "never". But for now, all we've got are really fluent chatbots.
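To make "sequence prediction" a little more concrete, here's a deliberately tiny sketch in Python: a toy bigram model that picks the next word based only on how often words followed each other in a scrap of training text. This is nothing like LaMDA's actual architecture (which is a huge neural network trained on vastly more data), and the training text and function names below are invented purely for illustration — but the underlying task, guessing the next token from patterns in prior text, is the same in spirit.

```python
# Toy illustration of next-word ("sequence") prediction.
# A bigram model: predict the next word purely from counts of
# which word followed which in the training text. Illustrative only.
import random
from collections import defaultdict, Counter

training_text = (
    "i am happy to talk with you . "
    "i am happy to help you . "
    "i am a language model ."
)

# Count which words follow which.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation from a prompt word.
word = "i"
output = [word]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "i am happy to help you ."
```

The point of the toy: a system can spit out fluent-looking continuations without anything we'd recognize as understanding — it's just replaying statistical patterns from what it has seen.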
But what fascinates me about this story isn’t the question of whether LaMDA is sentient. It isn’t.
The truly interesting question is …
… why Lemoine became convinced that LaMDA was sentient.
One big reason?