Center for AI Policy Unveils Model Legislation to Regulate Frontier AI Systems

April 29, 2025

WASHINGTON - The Center for AI Policy (CAIP) released its model legislation, the "Responsible AI Act of 2025" (RAIA), designed to establish a regulatory framework for the most powerful artificial intelligence systems while ensuring continued innovation in the field.

With frontier AI systems expected to approach, and potentially surpass, human intelligence in the coming years, RAIA proposes a commonsense approach to mitigating catastrophic risks through independent verification, hardware security requirements, and a dedicated federal oversight body.

"The unchecked development of increasingly powerful AI systems creates unprecedented risks to public safety and national security," said Jason Green-Lowe, executive director of the Center for AI Policy. "The ‘Responsible AI Act of 2025’ provides a balanced framework that allows innovation to flourish while ensuring these systems remain firmly under responsible human control. This model legislation is creating a safety net for the digital age to ensure that exciting advancements in AI are not overwhelmed by the risks they pose."

Key provisions of the "Responsible AI Act of 2025" (RAIA):

Targeted scope: RAIA applies only to the largest general-purpose AI systems, specifically exempting developers who spend less than $1 billion on training or who build narrow AI systems with limited applications.

Independent auditing system: Before receiving deployment permits, developers of frontier AI systems would need validation from independent auditors confirming adequate safeguards against catastrophic outcomes.

Hardware security: The model legislation includes minimum standards for physical security, cybersecurity, and know-your-customer protocols for AI data centers to prevent unauthorized access.

Monitoring and reporting: RAIA would establish a team of government experts to track AI trends and developments, providing critical intelligence to federal agencies.

Liability reform: The model legislation addresses current legal loopholes by articulating a standard of care and establishing clear liability frameworks for AI-related damages.

Emergency powers: RAIA outlines specific procedures for government intervention in the event of an AI emergency, including provisions for compensating innocent parties.

"Within the next few years, the largest artificial intelligence models will likely be smarter and more powerful than their human controllers," said Green-Lowe. "Under current law, private companies can deploy any AI model regardless of the danger it creates for public safety. It is unreasonable to bet the world's future on the chance that every frontier AI developer will always be perfectly responsible."

At the core of the legislation is a requirement for independent third-party testing and certification. Developers of frontier AI systems would need to be evaluated by independent auditors who would certify that sufficient safeguards exist to prevent catastrophic outcomes. A new federal office, the Frontier AI Administration, would review these audits and have the authority to require additional safeguards before issuing deployment permits.

"This model legislation plugs critical loopholes in our current regulatory framework by putting a second pair of eyes on the largest AI systems," Green-Lowe said. "We're not trying to stop AI progress—we're working to ensure AI remains beneficial by keeping it under meaningful human control."

The model legislation is designed to be a resource for lawmakers, industry leaders, and other stakeholders concerned with AI safety. 

Access the "Responsible AI Act of 2025" (RAIA) text here.

Read a two-page executive summary here, a section-by-section explainer here, and a policy brief explaining CAIP’s reasoning here.

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Based in Washington, DC, CAIP works to ensure AI is developed and implemented with effective safety standards. Learn more at centeraipolicy.org.

###
