Letting AI Into “The Mind Club”
Or, the psychology and morality of living amongst chatbots and self-driving cars
Given how hot AI has been lately, there’s a recurring debate about whether AI will become conscious. When could we say that it’s … sentient? That it has a mind?
I’m not gonna give you that answer, or even try. There is no widely-agreed-upon definition of what it means to be conscious, or of how consciousness emerges in humans. It’s a super interesting question, and definitely important to explore! But it remains unanswered: Philosophers and scientists still hotly argue over it. Indeed, if you’ve heard any tech dudes proclaiming that today’s large-language-model chatbots are “conscious” or “sentient” or “alive” or whatevs, you are very likely in the presence of an argument that is, as we say, not even wrong.
But! There is a narrower question about AI and the mind, and unlike this previous one, it’s a question we can begin to probe.
To wit: what are the situations in which we humans regard AI as being conscious? When — and why — do we treat machines as if they possessed a mind?
And what precisely are the implications of that?
Yeah, this is a bit of a dodge of the original question, I realize. It’s the same dodge Alan Turing used when he formulated his “Imitation Game”, i.e., the Turing Test. As Turing argued, if a machine can fool you into believing it’s human, then we might as well regard it as a thinking machine.
Per Turing, if we don’t have an agreed-upon way of quantifying how a mind works or what creates one, let’s consider instead what happens when another entity merely appears to possess a mind. How does that change the way we meatbag humans relate to it?
Back in 2016, long before ChatGPT roamed the planet, I read a book that offers some potentially useful tools for thinking about this. So I dusted it off two days ago and reread it — and sure enough, it shed some new light (for me) on our current debates about AI.