OpenAI begins training its next artificial intelligence model as it grapples with safety concerns


OpenAI said it has begun training the next generation of its artificial intelligence software, even as the start-up backed away from earlier claims that it wants to build “superintelligent” systems that are smarter than humans.

The San Francisco-based company said on Tuesday that it had started training a new AI system “to bring us to the next level of capability” and that its development would be overseen by a new safety and security committee.

But even as OpenAI races to develop artificial intelligence, a senior OpenAI executive appeared to walk back previous comments from its chief executive, Sam Altman, that the ultimate goal is to build a “superintelligence” far more advanced than humans.

Anna Makanju, OpenAI’s vice president of global affairs, said in an interview with the Financial Times that the company’s “mission” is to build artificial general intelligence capable of “cognitive tasks that humans can do today.”

“Our mission is to build AGI; I wouldn’t say that our mission is to build superintelligence,” said Makanju. “Superintelligence is technology that will be several orders of magnitude more intelligent than human beings on Earth.”

Altman told the FT in November that he spent half his time researching “how to build superintelligence”.

Liz Bourgeois, a spokeswoman for OpenAI, said superintelligence is not the company’s “mission.”

“Our mission is AGI that is useful to humanity,” she said after the FT story was first published on Tuesday. “To achieve this, we are also studying superintelligence, which we generally think of as systems more intelligent than AGI.” She disputed any suggestion that the two were at odds.

While fending off competition from Google’s Gemini and Elon Musk’s xAI start-up, OpenAI is trying to convince policymakers that it is prioritizing responsible AI development after several senior safety researchers quit this month.

The new safety and security committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman, with the remaining three members reporting to it.

The company did not say what the successor to GPT-4, the model that powers its ChatGPT app and received a major upgrade two weeks ago, might do or when it would launch.

Earlier this month, OpenAI disbanded its so-called superalignment team — tasked with focusing on the safety of potentially superintelligent systems — after Ilya Sutskever, the team’s leader and a co-founder of the company, quit.

Sutskever’s departure came months after he led a coup against Altman in November that ultimately proved unsuccessful.

The disbanding of the superalignment team prompted the departure of several other employees from the company, including Jan Leike, another senior AI safety researcher.

Makanju emphasized that work is still being done on the “long-term possibilities” of artificial intelligence, “even if they are theoretical.”

“AGI doesn’t exist yet,” Makanju added, saying such technology would not be released until it was safe.

Training is the primary step in how an AI model learns, drawing on the vast amounts of data and information fed to it. After the model has digested that data and its performance has improved, it is validated and tested before being deployed in products or applications.

This lengthy and highly technical process means that OpenAI’s new model may not become a tangible product for several months.

Additional reporting by Madhumita Murgia in London

