Yep, that’s right: metaphors require tacit knowledge, so they run into the common-sense knowledge problem in AI. I first heard about it way back in 1999, when I was writing a story about Cyc, Doug Lenat’s attempt to manually codify a huge corpus of human common-sense knowledge. He had a wonderful way of putting it (probably not unique to him, but nice nonetheless): common-sense knowledge is all the knowledge that’s invisible, that goes unremarked upon, in everyday writing, conversation, and discussion.
This is precisely what makes it so devilishly hard for AIs to absorb. At the moment, they’re mostly stuck being trained on words, images, and, to a limited extent, video. But human experience, the process by which we slowly gather our common-sense understanding of the world, likely has many, many more inbound modalities than just those three.