The Dangers Of Highly Centralized AI

Large language models are controlled by a tiny cohort of corporations

Clive Thompson
6 min read · Mar 31, 2023


Photo by Emiliano Vittoriosi on Unsplash

The new regime of large language models, used in products like ChatGPT and Bard, raises a ton of concerns.

For one, the models are prone to bullshitting, squirting out answers that are filled unpredictably with falsehoods. The models are also based on the mass hoovering-up of online text authored by humans, a practice that raises ethical questions. They make it trivially easy to flood the Internet with material ranging from shoulder-shrugging, eh-good-enough-I-guess SEOized chum to flat-out disinfo. They might cannibalize a ton of jobs. They highlight the way many tech leaders radically devalue and misunderstand human experience: “Hey, maybe we meatsacks are just stochastic parrots too, amirite?” That’s just for starters; you can add many other concerns besides.

Mind you, ardent fans of these big models believe these problems are either overblown or outweighed by possible upsides. The models can be used as a creative impetus, they argue; they can propel spiffy new automations that make individual workers far more efficient. They could be used as a learning tool, quickly summarizing entire fields and documents. Again, you can keep this list going too, if you want.

