Anthropic CEO: 10-Year Ban on State AI Regulation is ‘Too Blunt’

Dario Amodei, CEO of AI safety startup Anthropic, has publicly criticised a Republican-backed proposal to impose a 10-year federal ban on state-level regulation of artificial intelligence. In a recent New York Times opinion piece, Amodei called the proposed moratorium “too blunt” and argued it could hinder the U.S.’s ability to effectively govern rapidly evolving AI technologies.

Amodei: U.S. Needs Transparent AI Oversight, Not Regulatory Paralysis

The proposal, included in President Donald Trump’s tax reform package, seeks to preempt state-level AI regulation for a decade. It would override recent laws passed in multiple states to oversee high-risk AI use cases in sectors like healthcare, law enforcement, and employment.

Amodei warned that such a blanket ban—without a comprehensive federal framework in place—could leave the country vulnerable.

“A 10-year moratorium is far too blunt an instrument. AI is advancing too head-spinningly fast,” Amodei wrote. “Without a clear federal response, a moratorium would give us the worst of both worlds—no ability for states to act, and no national policy as a backstop.”

Anthropic CEO Calls for Federal Transparency Standards

Instead of sidelining state governments, Amodei urged Congress and the White House to establish a federal AI transparency standard. His recommendations include:

  • Mandatory risk assessments before releasing advanced AI models
  • Public disclosure of safety testing procedures and model limitations
  • Clear mitigation strategies for national security, bias, and misuse risks

“These are steps Anthropic already takes, and we welcome our competitors—OpenAI and Google DeepMind—who have also adopted similar transparency practices,” Amodei noted.

Bipartisan Opposition to the 10-Year AI Regulation Ban

The 10-year ban has triggered pushback from a bipartisan group of attorneys general, who argue that it undermines state efforts to regulate AI applications in sensitive and high-stakes areas. States like California, New York, and Illinois have recently enacted or proposed laws focused on biometric surveillance, algorithmic accountability, and AI-driven hiring tools.

Legal experts and lawmakers say the moratorium could stifle both innovation and accountability at the local level, where many of the immediate impacts of AI are already being felt.

Why AI Transparency May Require Legislative Incentives

Amodei emphasised that transparency practices among major AI developers are currently voluntary and may not endure. As AI systems grow in scale and commercial value, corporate incentives to disclose model risks could fade.

“We may need legislative incentives to ensure transparency remains the norm—not the exception,” he warned.

Looking Ahead: Balancing AI Innovation and Regulation

The ongoing debate highlights a central tension in U.S. tech policy: how to foster innovation while managing the complex risks of frontier AI systems. Amodei’s stance reflects a growing consensus in the AI community that smart, adaptive, and enforceable federal regulation is necessary to protect both public interests and national security.
