OpenAI begins training next AI model as it grapples with safety concerns
OpenAI says it has begun training its next-generation artificial intelligence software, even as the startup walks back earlier statements that it wanted to build “superintelligent” systems smarter than humans.
The San Francisco-based company said Tuesday that it had begun producing a new artificial intelligence system that “takes us to the next level of capability,” and that its development will be overseen by a new safety and security committee.
But while OpenAI races ahead on AI development, a senior OpenAI executive appears to have walked back chief executive Sam Altman’s earlier comments that its ultimate goal is to build a “superintelligence” far more advanced than humans.
Anna Makanju, OpenAI’s vice-president of global affairs, told the Financial Times in an interview that its “mission” is to build artificial general intelligence capable of performing the “cognitive tasks that humans can do today.”
“Our mission is to build AGI; I wouldn’t say our mission is to build superintelligence,” Makanju said. “Superintelligence is a technology that will be much more intelligent than humans on Earth.”
Altman told the FT in November that he spent half his time researching “how to build a superintelligence.”
While fending off competition from Google’s Gemini and Elon Musk’s startup xAI, OpenAI is trying to reassure policymakers that it is prioritizing responsible AI development after several senior safety researchers quit this month.
Its new committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman, and will report back to the remaining three board members.
The company did not say what the successor to GPT-4, the model that underpins its ChatGPT application and received a major upgrade two weeks ago, might do or when it will be released.
Earlier this month, OpenAI disbanded its so-called superalignment team, which was tasked with focusing on the safety of potentially superintelligent systems, after Ilya Sutskever, the team’s leader and a co-founder of the company, left.
Sutskever’s departure came months after he led a shock coup against Altman in November that ultimately failed.
The closure of the superalignment team prompted several employees to leave the company, including Jan Leike, another senior AI safety researcher.
Makanju emphasized that research into the “long-term capabilities” of AI is still being done, “even if they are theoretical.”
“AGI doesn’t exist yet,” Makanju added, saying such technology would not be released until it was safe.
Training is a fundamental step in how an artificial intelligence model learns, drawing on the vast amounts of data and information fed to it. After the model has processed the data and its performance has improved, it is validated and tested before being deployed into products or applications.
This lengthy and highly technical process means OpenAI’s new model may not become a tangible product for many months.
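As a rough illustration of that train–validate–deploy lifecycle, here is a minimal sketch in Python using a toy linear model. Every name and number in it is hypothetical, and it bears no relation to OpenAI’s actual pipeline; it only shows the general pattern the paragraph above describes.

```python
import random

random.seed(0)

# 1. "Vast amounts of data": here, a tiny synthetic dataset y = 2x + 1 + noise.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in [i / 100 for i in range(100)]]
random.shuffle(data)
train, validation = data[:80], data[80:]  # hold some data back for validation

# 2. Training: the model "learns" by nudging its weights to reduce prediction error.
w, b, lr = 0.0, 0.0, 0.05
for epoch in range(200):
    for x, y in train:
        error = (w * x + b) - y
        w -= lr * error * x  # gradient step on the weight
        b -= lr * error      # gradient step on the bias

# 3. Validation: measure performance on data the model never saw during training.
mse = sum(((w * x + b) - y) ** 2 for x, y in validation) / len(validation)
print(f"validation mean squared error: {mse:.4f}")

# 4. Deployment: ship the model only if it clears a quality bar.
if mse < 0.05:
    print(f"deploying model: y = {w:.2f}*x + {b:.2f}")
```

A frontier model differs from this sketch in scale, not in shape: the same loop of fitting to data, checking held-out performance, and gating release on test results is what makes the process stretch over months.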
Additional reporting by Madhumita Murgia in London