OpenAI has unveiled a comprehensive update to its safety and security practices. The move comes as the company's AI models continue to advance in capability and solve increasingly complex problems for millions of users worldwide.

The cornerstone of these changes is the transformation of OpenAI's Safety and Security Committee (SSC) into an independent Board oversight committee. This committee, chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University's School of Computer Science, will oversee critical safety and security measures related to model development and deployment.

Other notable members of the committee include Adam D'Angelo, Quora co-founder and CEO; retired US Army General Paul Nakasone; and Nicole Seligman, former EVP and General Counsel of Sony Corporation. The committee will have the authority to delay model releases until safety concerns are adequately addressed.

The company has outlined five key areas of focus for its enhanced safety and security practices:

1. Establishing independent governance for safety and security

2. Enhancing security measures

3. Increasing transparency about its work

4. Collaborating with external organisations

5. Unifying safety frameworks for model development and monitoring

In terms of cybersecurity, OpenAI is taking a risk-based approach, evolving its measures as the threat model and risk profiles of its models change. This includes expanded internal information segmentation, additional staffing for around-the-clock security operations, and continued investment in research and product infrastructure security.

The company is also exploring the development of an Information Sharing and Analysis Center (ISAC) for the AI industry to enable the sharing of threat intelligence and cybersecurity information among entities within the sector.

OpenAI is actively developing new partnerships with third-party safety organisations and non-governmental labs for independent model safety assessments.

The company has also reached agreements with the U.S. and U.K. AI Safety Institutes to collaborate on researching emerging AI safety risks and standards for trustworthy AI. Additionally, OpenAI is working with Los Alamos National Laboratory to study how AI can be used safely in laboratory settings to advance bioscientific research.


