OpenAI Forms New Safety Committee as It Nears Artificial General Intelligence

Redmond, Washington, USA
  • Former team leaders Ilya Sutskever and Jan Leike announced their departures from OpenAI, citing concerns over the company's safety culture.
  • OpenAI has begun training its next flagship AI model, which is expected to bring it closer to Artificial General Intelligence (AGI).
  • The company has disbanded its previous oversight board and created a new safety and security committee.
  • The new safety group consists of CEO Sam Altman, Bret Taylor, Adam D'Angelo, and Nicole Seligman.
  • The new safety group will evaluate OpenAI's processes and safeguards over the next 90 days.

OpenAI, a leading artificial intelligence (AI) research laboratory, has announced that it has begun training its next flagship AI model. The new model is expected to bring the company closer to achieving Artificial General Intelligence (AGI), a type of AI that can perform any intellectual task a human being can. This comes after OpenAI disbanded its previous oversight board and created a new safety and security committee, which will recommend critical safety and security decisions for OpenAI projects and operations.

The new safety group consists of CEO Sam Altman, board chair Bret Taylor, Adam D’Angelo, and Nicole Seligman, all members of OpenAI's board of directors. The formation of the new oversight team comes after former team leaders Ilya Sutskever and Jan Leike announced their departures from OpenAI. Leike stated that OpenAI's "safety culture and processes have taken a backseat to shiny products."

OpenAI has begun training its next frontier model, which it anticipates will bring the company closer to AGI. The new safety group will evaluate OpenAI's processes and safeguards over the next 90 days, sharing recommendations with the company's board. OpenAI will provide an update on adopted recommendations at a later date.

The development of advanced AI technology has been a topic of debate as experts disagree on when tech companies will reach AGI and what risks it may pose. Companies including OpenAI, Google, Meta, and Microsoft have steadily increased the power of AI technologies for more than a decade. OpenAI's GPT-4 model powers ChatGPT and enables chatbots and other software apps to answer questions, write emails, generate term papers and analyze data.

OpenAI's new safety committee will work to hone policies and processes for safeguarding the technology as it continues its pursuit of AGI. The company aims to move AI technology forward faster than its rivals while also addressing concerns about the risks posed by advanced AI systems.



Confidence

90%

Doubts
  • Are there any risks associated with OpenAI's pursuit of AGI that have not been considered by the company?
  • How effective will the new safety measures be in preventing potential misuse or accidents with the AI technology?

Sources

99%

  • Unique Points
    • OpenAI has begun training a new flagship artificial intelligence model.
    • OpenAI aims to build ‘artificial general intelligence’ with the new model.
    • The new model will be used for various AI products including chatbots, digital assistants, search engines and image generators.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

98%

  • Unique Points
    • OpenAI established a new committee to make recommendations to its board about safety and security.
    • CEO Sam Altman, board chair Bret Taylor, and board member Nicole Seligman will lead the new Safety and Security Committee.
    • Jan Leike, an OpenAI executive focused on AI safety, resigned from the company with criticisms of underinvestment in safety work and tension with leadership.
    • Ilya Sutskever, another leader of OpenAI’s superalignment team, also departed the company after a reversal in his support for CEO Sam Altman.
  • Accuracy
    • Jan Leike resigned from the company with criticisms of underinvestment in safety work and tension with leadership.
    • Ilya Sutskever also departed the company after a reversal in his support for CEO Sam Altman.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

100%

  • Unique Points
    • OpenAI CEO Sam Altman is on the safety and security committee.
    • The new committee will recommend critical safety and security decisions for OpenAI projects and operations.
    • OpenAI has begun training its next frontier model, expected to bring the company closer to AGI (Artificial General Intelligence).
    • Former team leaders Ilya Sutskever and Jan Leike announced their departures from OpenAI.
    • Jan Leike stated that OpenAI’s safety culture and processes have taken a backseat to shiny products.
    • The new safety group will evaluate OpenAI’s processes and safeguards over the next 90 days, sharing recommendations with the company’s board.
    • OpenAI will provide an update on adopted recommendations at a later date.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

92%

  • Unique Points
    • OpenAI is training its next AI model.
    • OpenAI is addressing safety concerns.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication