Future-proofing: How much should we spend to avoid AI apocalypse?
Can we get the good of AI without the bad? A lot of money might help.

The world is plowing trillions of dollars into speeding the development of artificial intelligence, even though the technology could eventually find humanity to be inconvenient. With the help of robots or unwitting humans, an AI system could unleash a killer virus or destroy the food supply.
New research by Charles Jones, an economist at the Stanford Graduate School of Business, concludes that “spending at least 1 percent of GDP annually to mitigate AI risk can be justified.” In the United States, that would come to about $300 billion a year.
Jones doesn’t suggest how so much money might be spent, but it would presumably include salaries for top-tier computer scientists as well as lawyers and diplomats who could negotiate binding international treaties to control AI. That and some heavy computing horsepower.
Most of the safety spending today goes toward developing algorithms that align AI systems with human needs, trying to understand how AI models arrive at decisions, and searching for ways to keep control of AI as it gets increasingly powerful.
And so far these efforts have been running on limited funds. Global spending to mitigate the existential risk of AI was a little over $100 million last year, according to an estimate by Stephen McAleese, a software engineer, based on analyzing grant databases. That’s 0.03% of what Jones said could be justified.
Jones presented his paper in September at a conference on the Stanford campus that brought together many of the top (human) brains in economics who are working on issues related to AI. Rather than waiting for journals to review and publish papers from the conference, which could take years, the National Bureau of Economic Research is collecting them into a book, parts of which have been published online. “There is some urgency to get things out,” said Anton Korinek, a University of Virginia economist and co-organizer.
The power of AI is increasing so rapidly that it’s hard to predict what things will look like even a year or two ahead. Jones admitted in his paper that when he started to think about how much should be spent to reduce the “existential” risks of AI, the question “at first struck me as too open-ended to be usefully addressed by standard economics.”
He took a shot anyway. He factored in how much nations spent to reduce the mortality rate from COVID-19, gathered estimates, or guesstimates, of how likely AI is to wipe out humanity, and considered how effective risk-mitigation spending is likely to be.
In almost every scenario Jones looked at, spending at least 1% of GDP annually for the next decade was justified. The average share in the simulations was 8% of GDP, a number Jones called “stunning.” The share would be even higher if society took into account the welfare of future generations, not just people alive today.
Jones did find scenarios in which spending a lot on risk mitigation would be a bad idea: if the extinction risk is low, or if the risk is high but mitigation is hopeless. (God forbid.)
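Jones’s actual model is more elaborate, but the flavor of the cost-benefit arithmetic can be sketched in a few lines of Python. Every figure below (the GDP and population numbers, the $10 million value of a statistical life, and the risk and effectiveness inputs) is an illustrative assumption for this sketch, not a number from his paper.

# Toy willingness-to-pay calculation in the spirit of Jones's exercise.
# All constants are illustrative assumptions, not figures from the paper.
US_GDP = 30e12          # rough U.S. GDP, dollars per year (assumed)
POPULATION = 335e6      # rough U.S. population (assumed)
VSL = 10e6              # value of a statistical life, a common ~$10M benchmark
HORIZON_YEARS = 10      # spread the spending over a decade

def justified_gdp_share(extinction_prob, mitigation_effectiveness):
    """Return the share of GDP per year worth spending on mitigation.

    extinction_prob: chance AI wipes out humanity over the horizon (assumed)
    mitigation_effectiveness: fraction of that risk the spending removes (assumed)
    """
    risk_reduction = extinction_prob * mitigation_effectiveness
    total_benefit = risk_reduction * POPULATION * VSL  # statistical lives saved
    annual_benefit = total_benefit / HORIZON_YEARS
    return annual_benefit / US_GDP

scenarios = [
    ("moderate risk, effective mitigation", 0.01, 0.5),
    ("low risk, effective mitigation", 0.001, 0.5),
    ("high risk, near-hopeless mitigation", 0.10, 0.001),
]
for label, prob, effect in scenarios:
    print(f"{label}: up to {justified_gdp_share(prob, effect):.2%} of GDP per year")

Under these toy assumptions, the first scenario justifies spending more than 5% of GDP a year, in the neighborhood of Jones’s simulation average, while the low-risk and hopeless-mitigation scenarios fall below his 1% threshold.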
Not every paper at the Stanford conference was that grim. Betsey Stevenson, a professor of public policy and economics at the University of Michigan, predicted that AI would handle tedious work and free people up for more satisfying endeavors such as gardening, art or spending time with friends. Addressing a room full of intense economists, she noted that not everyone finds their occupation as fulfilling as economists do.
Another conundrum the conference addressed was how governments would raise revenue when AI and robots are doing all the work. Korinek and Lee Lockwood, a University of Virginia colleague, concluded that taxes on labor would be replaced with taxes on consumption, which in turn would eventually be replaced by taxes on capital — namely, computers and robots.
c.2025 The New York Times Company

