The Dangers Of Highly Centralized AI

Large language models are controlled by a tiny cohort of corporations

Clive Thompson

--


The new regime of large language models — used in products like ChatGPT and Bard — raises a ton of concerns.

For one, the models are prone to bullshitting, squirting out answers that are filled unpredictably with falsehoods. The models are also based on the mass hoovering-up of online text authored by humans, a practice that raises ethical questions. They make it trivially easy to flood the Internet with material ranging from shoulder-shrugging, eh-good-enough-I-guess SEOized chum to flat-out disinfo. They might cannibalize a ton of jobs. They highlight the way many tech leaders radically devalue and misunderstand human experience: “Hey, maybe we meatsacks are just stochastic parrots too, amirite?” That’s just for starters; you can add many other concerns besides.

Mind you, ardent fans of these big models believe these problems are either overblown or outweighed by possible upsides. The models can be used as a creative impetus, they argue; they can propel spiffy new automations that make individual workers far more efficient. They could be used as a learning tool, quickly summarizing entire fields and documents. Again, you can keep this list going, if you want.

But I’m going to set all of these aside, just for a minute, to ponder one problem I don’t think is getting quite enough attention:

The field of large language models is becoming dangerously centralized. A huge amount of power resides in the hands of a tiny number of firms.

After all, sticking for now to the realm of English-language models, the main players are OpenAI, Google, and Meta. These firms are the ones propelling ChatGPT, Bing’s chatbot, and Bard. The reason huge models have emerged from huge firms is that they require huge resources: mammoth amounts of cloud compute and enough electricity to boil Olympic swimming pools. Those tech giants are the only ones with those resources. (That’s one reason OpenAI, a comparatively small firm, initially partnered with Microsoft: for the latter’s sprawling cloud farms.)

As MIT AI professor Alexandr Madry told a House subcommittee this winter (from a story in Politico):
