OpenAI said Tuesday that it has begun training a new flagship artificial intelligence model to succeed the GPT-4 technology that powers ChatGPT, its popular online chatbot.
The San Francisco startup, one of the world’s leading AI companies, said in a blog post that it expects the new model to bring “next level capabilities” as it works to build “artificial general intelligence,” or AGI, a machine that can do everything the human brain can do. The new model would be an engine for AI products, including chatbots, digital assistants similar to Apple’s Siri, search engines and image generators.
OpenAI also said it would form a new safety and security committee to explore how to deal with risks posed by new models and future technologies.
“We are proud to have built and launched an industry-leading model for both performance and safety, and we welcome robust discussion at this important moment,” the company said.
OpenAI aims to advance AI technology faster than its rivals while appeasing critics who say the technology is increasingly dangerous, spreading disinformation, displacing jobs and even threatening humanity. Experts disagree on when technology companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of AI technologies for more than a decade, demonstrating a notable leap roughly every two to three years.
OpenAI’s GPT-4, launched in March 2023, allows chatbots and other software apps to answer questions, write emails, generate term papers and analyze data. An updated version of the technology, unveiled this month but not yet widely available, can also generate images and respond to questions and commands with an interactive voice.
Days after OpenAI unveiled the updated version, called GPT-4o, the actress Scarlett Johansson said the company had used a voice that “sounds eerily similar to mine.” She said OpenAI CEO Sam Altman had rebuffed her efforts to license her voice for its products, and that she had hired a lawyer and asked OpenAI to stop using the voice. The company said the voice was not Ms. Johansson’s.
Technologies like GPT-4o learn by analyzing vast amounts of digital data, including sounds, photos, videos, Wikipedia articles, books, and news articles. The New York Times sued OpenAI and Microsoft last December, alleging copyright infringement of news content related to AI systems.
The digital “training” of an AI model can take months or even years. Once training is complete, AI companies typically spend several more months testing and fine-tuning the technology for public use.
This could mean that OpenAI’s next model won’t be released for another nine months to a year or more.
As OpenAI trains new models, a new safety and security committee will work to hone policies and processes to protect the technology, the company said. The committee includes Mr. Altman and OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman. The company said the new policy could be implemented in late summer or fall.
This month, OpenAI said Ilya Sutskever, its co-founder and one of the leaders of its safety efforts, was leaving the company. His departure raised concerns that OpenAI is not doing enough to address the risks posed by AI.
Dr. Sutskever joined three other board members in removing Mr. Altman from OpenAI last November, saying the board no longer had confidence in Mr. Altman’s handling of the company’s plan to create artificial general intelligence for the benefit of humanity. After a lobbying campaign by Mr. Altman’s allies, he was reinstated five days later and has since reasserted control of the company.
Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways to ensure that future AI models would do no harm. Like others in the field, he had grown increasingly concerned that AI could pose a threat to humanity.
Jan Leike, who ran the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the team’s future uncertain.
OpenAI has folded its long-term safety research into a broader effort to ensure that its technology is safe. That work will be led by John Schulman, another co-founder, who previously led the team that created ChatGPT. The new safety committee will oversee Dr. Schulman’s research and provide guidance on how the company addresses technology risks.