Anthropic has called for urgent government intervention in AI policy within the next 18 months, warning that the window for proactive risk prevention is rapidly closing as AI capabilities advance at an unprecedented pace.

In a policy paper released Thursday, Anthropic detailed how AI systems have made dramatic progress in the past year, with models showing significant improvements in areas like cybersecurity and scientific reasoning. On SWE-bench, a benchmark of real-world software engineering problems, performance jumped from 1.96% of problems solved in October 2023 to 49% by October 2024.

"Current models can already assist on a broad range of cyber offence-related tasks, and we expect that the next generation of models—which will be able to plan over long, multi-step tasks—will be even more effective," Anthropic stated in its announcement.

The company emphasised that its internal "Frontier Red Team" has observed continued progress in AI capabilities related to chemical, biological, radiological, and nuclear (CBRN) applications. While the current advantage of frontier models over existing tools remains relatively small, Anthropic reports this gap is "growing rapidly."

Drawing on its experience implementing a Responsible Scaling Policy (RSP) since September 2023, Anthropic proposed three key elements for effective AI regulation:

1. Mandatory transparency about companies' safety policies and risk evaluations

2. Incentives for better safety and security practices

3. Simple, focused regulations that avoid unnecessary burdens

"RSPs serve as a forcing function for a developer to flesh out risks and threat models," Anthropic explained to readers of its policy paper. "Such models tend to be rather abstract, but an RSP makes them interact directly with the day-to-day business of a company."

The company acknowledged concerns that regulation could slow innovation but argued that carefully designed oversight could actually accelerate progress. "It's been our repeated experience at Anthropic that our safety research efforts have had unexpected spillover benefits to the science of AI in general," the company noted in the paper's FAQ section.


