OpenAI, the company behind ChatGPT, is taking steps to address concerns about AI safety and governance.
CEO Sam Altman recently announced that OpenAI is working with the US AI Safety Institute to provide early access to its next major generative AI model for safety testing.
The move comes amid growing scrutiny of OpenAI’s commitment to AI safety and its influence on policymaking.
> a few quick updates about safety at openai:
>
> as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
>
> our team has been working with the US AI Safety Institute on an agreement where we would provide…
>
> — Sam Altman (@sama) August 1, 2024
Collaboration with the US AI Safety Institute
The US AI Safety Institute, a federal body housed within the National Institute of Standards and Technology (NIST) that aims to assess and address risks in AI platforms, will have the opportunity to test OpenAI’s upcoming AI model before its public release. While details of the agreement are scarce, the collaboration represents a significant step toward greater transparency and external oversight of AI development.
The partnership follows a similar deal OpenAI struck with the UK’s AI safety body in June, suggesting a pattern of engagement with government entities on AI safety issues.
Addressing safety concerns
OpenAI’s recent actions appear to be a response to criticism over its perceived lack of prioritization of AI safety research. The company previously disbanded a unit working on controls for “superintelligent” AI systems, prompting high-profile resignations and public scrutiny.
In an effort to rebuild trust, OpenAI has:
- Removed restrictive non-disparagement clauses from employee agreements.
- Created a safety committee.
- Pledged 20% of its computing resources to safety research.
However, some observers remain skeptical, particularly after OpenAI staffed its safety committee with internal company personnel and reassigned a top AI safety executive.
Influencing AI policy
OpenAI’s engagement with government agencies and its backing of the Future of AI Innovation Act have raised questions about the company’s influence in AI policymaking. The timing of these moves, coupled with OpenAI’s increased lobbying spending, has led to speculation about potential regulatory capture.
Altman’s position on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board further underscores the company’s growing involvement in AI policymaking.
Looking to the future
As AI technology rapidly advances, the balance between innovation and safety remains a key concern. OpenAI’s collaboration with the US AI Safety Institute represents a step toward more transparent and accountable AI development.
However, it also highlights the complex relationship between tech companies and regulators in shaping the future of AI governance.
The technology community and policymakers will be closely watching how this partnership develops and what impact it will have on the broader landscape of AI safety and regulation.