Letting AI Into “The Mind Club”
Or, the psychology and morality of living amongst chatbots and self-driving cars
Given how hot AI has been lately, there’s a regular debate about whether AI will become conscious. When could we say that it’s … sentient? That it has a mind?
I’m not gonna give you that answer, or even try. There is no widely agreed-upon definition of what it means to be conscious, or of how consciousness emerges in humans. It’s a super interesting question, and definitely important to explore! But it remains unanswered: Philosophers and scientists still hotly argue over it. Indeed, if you’ve heard any tech dudes proclaiming that today’s large-language-model chatbots are “conscious” or “sentient” or “alive” or whatevs, you are very likely in the presence of an argument that is, as we say, not even wrong.
But! There is a narrower question about AI and the mind, and unlike that big one, it’s a question we can begin to probe.
To wit: what are the situations in which we humans regard AI as being conscious? When — and why — do we treat machines as if they possessed a mind?
And what precisely are the implications of that?
Yeah, this is a bit of a dodge of the original question, I realize. It’s the same dodge Alan Turing used when he…