OpenAI, the prominent artificial intelligence (AI) research laboratory, has announced that it has begun training its next flagship AI model. The new model is expected to bring the company closer to Artificial General Intelligence (AGI), a form of AI capable of performing any intellectual task a human can do. The announcement follows OpenAI's dissolution of its previous safety-focused oversight team and the creation of a new Safety and Security Committee, which will recommend critical safety and security decisions for OpenAI projects and operations.
The new safety group consists of CEO Sam Altman, board chair Bret Taylor, Adam D'Angelo, and Nicole Seligman, all members of OpenAI's board of directors. The formation of the new oversight team comes after former team leaders Ilya Sutskever and Jan Leike announced their departures from OpenAI. Leike stated that OpenAI's "safety culture and processes have taken a backseat to shiny products."
OpenAI has begun training its next frontier model, which it anticipates will produce systems that bring the company closer to AGI. The new safety group will evaluate OpenAI's processes and safeguards over the next 90 days and share its recommendations with the company's board. OpenAI says it will provide an update on the recommendations it adopts at a later date.
The development of advanced AI technology remains a topic of debate, as experts disagree on when tech companies will reach AGI and what risks it may pose. Companies including OpenAI, Google, Meta, and Microsoft have steadily increased the power of AI technologies for more than a decade. OpenAI's GPT-4 model powers ChatGPT, enabling chatbots and other software applications to answer questions, write emails, generate term papers, and analyze data.
OpenAI's new safety committee will work to refine the policies and processes for safeguarding the technology as the company continues its pursuit of AGI. OpenAI aims to advance AI technology faster than its rivals while also addressing concerns about the risks posed by advanced AI systems.