Heh, I see your point. Thinkers like Nick Bostrom, who spends a lot of time pondering the existential risk of superintelligent AI, argue for precisely B): if and when a superintelligent AI arises, it would quite likely hide its true intelligence, for fear of alerting humanity.
Me, though, I think the answer is simpler:
C) Humanity created a machine that does an extremely good job of statistically predicting the next likely-sounding clause when given a text prompt. Not a lot going on here beyond that yet…
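
To put C) in concrete terms, here's roughly what that "statistical prediction" machinery looks like in code. This is just a sketch using GPT-2 through the Hugging Face `transformers` library (my choice for the example; strictly speaking these models predict the next *token*, not a whole clause):

```python
# Rough sketch of what I mean by C): ask a language model for the
# probability distribution over the next token, given a prompt.
# Uses GPT-2 via Hugging Face's `transformers` -- my pick for the
# example, nothing special about it.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "If and when a superintelligent AI arises, it"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's whole output amounts to: "given everything so far, how
# likely is each possible next token?" That's the entire trick.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{p.item():.3f}  {tokenizer.decode(int(tok_id))!r}")
```

Run it and you get a ranked list of plausible next tokens with their probabilities. Chain that one step over and over and you get fluent paragraphs, which is impressive, but it's still the same step repeated.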