Sorry I'm late replying to this -- you nicely hit upon all the really big and interesting points about AI that can behave as if it were human.
The first point, about whether a machine could have desires and intents and motivations: This is really central! I struggle to wrap my head around it. I've heard some argue that extremely complex AI may be liable to behave in ways we can't predict, which may be indistinguishable from "intent" -- I can see that. Bostrom suspects that intent will always be the goals and biases built into the AI to begin with, so we need to be super careful in thinking about those, which I'd agree with. Beyond that, the question of how a machine develops intent, or desires, sort of breaks my brain ... I can't figure out how to get there from here. That doesn't mean it's impossible; I never say "never". But I can't quite figure it out.
The other big thing you raise here is the more proximal concern, i.e. how humans will use AI that can pass itself off as human. It's a super powerful tool, eh?