Glad you enjoyed it! As for measures to improve factual reliability -- yeah, I can (roughly) imagine improving the factuality of these chatbots and talk-like-a-human AIs. One thing that might help is integrating more old-school, hand-curated knowledge bases, like Wolfram Alpha, to run as a "check" on top of everything the LLM generates. Human intelligence, as cognitive psychologists suspect, seems to be a bunch of different processes all working together, sometimes pulling in different directions and sometimes reinforcing one another ... it seems unlikely to me that a *single* AI tool -- i.e. the next-word/next-clause predictive power of a large language model -- is alone sufficient to mimic human intelligence. But assembling a *bunch* of modules that work together and cross-check one another might be the way to go!
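If it helps to picture that "check module" idea, here's a rough sketch in Python of what routing an LLM's draft answer through a curated fact source might look like. Everything here (`query_llm`, `query_knowledge_base`, `answers_agree`) is a made-up placeholder, not any real API -- just an illustration of the cross-checking architecture, under the assumption that the curated source can be queried directly.

```python
# Sketch of the "modules that cross-check" idea: an LLM's draft answer is
# flagged as verified only when a hand-curated source agrees with it.
# All function names are hypothetical stand-ins, not real library calls.

def query_llm(question: str) -> str:
    """Stand-in for a call to a large language model."""
    return "The Eiffel Tower is about 330 metres tall."

def query_knowledge_base(question: str):
    """Stand-in for a lookup in a curated source (a Wolfram-Alpha-style engine).
    Returns None when the source has no entry for the question."""
    facts = {"how tall is the eiffel tower": "330 metres"}
    return facts.get(question.strip().lower().rstrip("?"))

def answers_agree(llm_answer: str, kb_answer: str) -> bool:
    """Crude agreement check: does the curated fact appear in the LLM's answer?"""
    return kb_answer.lower() in llm_answer.lower()

def checked_answer(question: str) -> str:
    draft = query_llm(question)
    fact = query_knowledge_base(question)
    if fact is None:
        return draft + "  [unverified: no curated source found]"
    if answers_agree(draft, fact):
        return draft + "  [verified against curated source]"
    return f"Curated source disagrees; reporting its value instead: {fact}"

if __name__ == "__main__":
    print(checked_answer("How tall is the Eiffel Tower?"))
```

The point isn't the toy string-matching, of course -- it's the shape of the thing: one module generates, another module with very different strengths vets the output before it reaches the reader.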
In one sense, "cyborg writer" applies to pretty much everyone these days! If one uses a search engine to find info that one incorporates into one's writing -- well, that's using a metric ton of machine learning in the service of one's prose. So there's almost nothing we write that isn't cyborg, heh. By defining "cyborg" the way I did here -- to apply only to the use of LLMs -- I'm contributing to the trend, a bad one in my eyes, of equating "AI" with "LLMs" ... when in reality forms of machine learning and AI are all around us, all day long ...