I was at the movies two nights ago and a trailer came on for M3GAN, a movie that is being billed as “AI Robot Horror”. I won’t give away any more than that trailer gives away, but basically it’s the tale of a robot that’s designed as a friend for a traumatized child, but which goes rogue and very kill-y in pursuit of its mission to protect the child.
Judging by the reaction to the trailer online, people are understandably unsettled by the prospect of AI that becomes self-aware, develops volition, and won’t turn off when you tell it to. And which starts, like, slaughtering people.
Sometimes people ask me, “hey, you’re a tech journalist who’s been following AI for decades — is it possible an AI could really do that? Develop some sort of will and desires? And then start picking off humans, one by one?”
My answer is twofold:
Cognitively? M3GAN’s a fantasy, of course. At the moment, no tech company is remotely close to producing AI that can process information like M3GAN. Hell, Tesla can’t even make cars that reliably do left turns on wide roads, let alone develop a sentient desire to entrap humanity. There’s no AI I’m aware of that can flexibly understand the meaning of speech, human intent, or common-sense facts. It’s certainly not impossible AI folks will get there. I never say “never”! But it’s nowhere near imminent.
But when it comes to M3GAN, what is even more sci-fi than the way the robot speaks and “thinks” …
… is the way it/she moves.
When you watch that trailer, you see a robot the size of a young girl that is effortlessly nimble. It walks into a house and wanders around. It gently strokes the human child on the cheek; it grabs and wields various weapons; and it executes funky dance moves, including full-on Olympic-grade front flips.
This is far more deliriously and hilariously sci-fi than any “thinking” or “sentience” that M3GAN displays. And that’s because of a rule which, in the world of robotics, is known as “Moravec’s Paradox”.