Nov 12, 2024 03:03 PM IST
OpenAI and rivals are trying to overcome delays and challenges in building ever-larger language models by developing more human-like ways for algorithms to “think”
Artificial intelligence (AI) companies such as OpenAI are trying to overcome unexpected delays and challenges in developing larger language models by using more human-like ways for algorithms to “think”, news agency Reuters reported.
The techniques behind OpenAI’s new o1 model could reshape the AI arms race, because the “bigger is better” philosophy of scaling up current models by adding more data and computing power is running into its limits.
The report quoted Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, as saying that the results of scaling up pre-training have plateaued, and that “the 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing.”
“Scaling the right thing matters more now than ever,” he added.
Researchers at major AI labs have been running into delays and disappointing outcomes in the race to create a large language model that outperforms OpenAI’s GPT-4, which is nearly two years old, according to the report.
Training runs for these models end up costing tens of millions of dollars, with hundreds of chips running simultaneously, according to the report. It added that the more complicated these systems are, the more likely they are to suffer a hardware-induced failure, and that researchers may not know how well a model performs until the end of a run, which can take months.
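As a rough illustration of how such costs add up, consider the back-of-the-envelope sketch below in Python. The cluster size, hourly rate and run length are illustrative assumptions for the sketch, not figures from the report, which only says runs involve hundreds of chips, can take months and cost tens of millions of dollars.

    # Back-of-the-envelope cost estimate for a large model training run.
    # Every figure below is an illustrative assumption, not from the report.
    chips = 2_000             # accelerators running simultaneously (assumed)
    cost_per_chip_hour = 3.0  # USD per accelerator-hour (assumed rental rate)
    run_days = 120            # a run "can take months", per the report

    hours = run_days * 24
    total_cost = chips * cost_per_chip_hour * hours
    print(f"Estimated cost: ${total_cost:,.0f}")  # -> Estimated cost: $17,280,000

Even with these modest assumptions, a single run lands in the tens of millions of dollars once a larger cluster or a longer schedule is assumed, which is why a late-stage hardware failure is so costly.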