STOCKHOLM: Top artificial intelligence executives, including OpenAI CEO Sam Altman, on Tuesday joined experts and professors in raising the “risk of extinction from AI”, which they urged policymakers to treat on a par with the risks posed by pandemics and nuclear war. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS).
As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google. Also among them were Geoffrey Hinton and Yoshua Bengio – two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work on deep learning – and professors from institutions ranging from Harvard to China’s Tsinghua University.
A statement from CAIS singled out Meta, where the third godfather of AI, Yann LeCun, works, for not signing the letter. The letter coincided with the US-EU Trade and Technology Council meeting in Sweden, where politicians are expected to discuss regulating AI. In April, Elon Musk and a group of AI experts and industry executives had been the first to cite such potential risks to society.
The statement comes at a time of growing concern about the potential harms of AI. Recent advances in so-called large language models – the type of AI system used by ChatGPT and other chatbots – have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs. Some believe AI could eventually become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, although researchers sometimes stop short of explaining how that would happen.
These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building – and, in many cases, are furiously racing to build faster than their competitors – poses grave risks and should be regulated more tightly.
Recent developments in AI have sparked fears that the technology could also enable privacy violations, power misinformation campaigns, and lead to problems with “smart machines” thinking for themselves. AI pioneer Hinton earlier said AI could pose a “more urgent” threat to humanity than climate change. This month, Altman met with US President Joe Biden and Vice-President Kamala Harris to discuss AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention, and he called for regulation to address AI’s potential harms.