OpenAI chief executive officer (CEO) Sam Altman has apparently downplayed the risks associated with artificial general intelligence (AGI). However, he warned that 'superintelligence' will carry those same safety concerns and will be far more significant.
“My guess is we will hit AGI sooner than most people in the world think, and it will matter much less,” the report quoted him as saying in an interview with Andrew Ross Sorkin at The New York Times DealBook Summit on Wednesday.
He added, “A lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster.”
“I expect the economic disruption to take a little longer than people think because there’s a lot of inertia in society,” the report quoted him as saying. “So, in the first couple of years, maybe not that much changes. And then maybe a lot changes.”
However, he said that what could be called 'superintelligence' will raise those same safety concerns and will be far more significant, though it remains a long way off. 'A few thousand days' is his estimate.
All of this is despite OpenAI’s charter once saying AGI will be able to “automate the great majority of intellectual labour.”
Altman recently teased that AGI could arrive as soon as 2025 and would be achievable on existing hardware, possibly by weaving together OpenAI's large language models.
However, he says this against the backdrop of AGI's arrival serving as an escape hatch from OpenAI's exclusive but messy deal with Microsoft: reaching AGI would allow the company to exit its profit-sharing arrangements with the tech giant.