The Risk of a New AI Winter
History has not been kind to AI pioneers who overpromise and underdeliver
Is winter coming?
I speak of an “AI winter”, of course.
AI winters are part of the boom-bust cycle we’ve seen throughout AI history. During an AI boom, computer scientists and firms invent new techniques that seem exciting and powerful. Tech firms use them to build products that promise to make everyone’s lives easier (or more productive), and investors unleash a geyser of funding. Everyone — including starry-eyed journalists — begins overpromising, gushing about the artificial smarts that will be invented. Humanlike! Godlike! Omniscient!
That level of hype can’t be maintained, though — and at some point the industry starts underdelivering. The AI turns out to be surprisingly failure-prone. Companies and people that try using it to solve everyday problems discover it’s riddled with errors, often quite mundane ones.
Then an “AI winter” begins. Customers stop paying top dollar for AI products; investors close their purses. Journalists begin more critically appraising the landscape. And because everyone feels burned (or embarrassed), things slide into an overly negative cycle: Even the computer scientists and inventors with legitimately interesting new paths for AI can’t easily get funding to pursue them. This lasts for years.
There have been two big AI winters so far. Here’s a chart of them …
The first boom started in 1956, when AI luminaries at a workshop in Dartmouth predicted that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”. Almost two decades of work followed, with giddy promises along the way: When the New York Times first reported on neural nets that the Navy was working on, the reporter described “the embryo of an electronic computer that the Navy expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”. In this early upswing, companies and computer scientists were particularly entranced by the idea of automatic translation of…