CAIP work

Model Legislation

The Center for AI Policy (CAIP) drafted the Responsible AI Act of 2025 (“RAIA”) as model legislation that could serve as a template for legislators who are interested in protecting against catastrophic AI risks.

Artificial intelligence poses a wide variety of serious problems, including job loss, invasion of privacy, bias, errors, and threats to national security.

Our team’s particular area of expertise is catastrophic risks from AI: risks that could kill millions of people or cause civilization to collapse. AI could turn out to be extremely dangerous if terrorists misuse it to create biological weapons, if it destabilizes the international order and leads to all-out war, or if it gradually or suddenly escapes human oversight and takes control of the world’s physical and economic resources.

Responsible AI Act of 2025 (RAIA)

RAIA is designed to serve as a template for legislators who are interested in protecting against these catastrophic risks.

Our model legislation is relatively ambitious compared to what’s being discussed in Congress as of mid-2025, but we see this legislation as something like the minimum set of policies that will need to pass in order to protect humanity and give us a reasonable shot at survival. Strong bipartisan majorities of the public have consistently said that they prefer a tougher regulatory approach to AI that protects against catastrophic risks, and large surveys of leading computer scientists have reliably shown that these risks are real and worth worrying about. We think it’s best to be honest about these risks and to advocate for policies that would give us a fighting chance to contain them.

We hope that Congressional offices will find some or all of our model legislation useful as they develop solutions to the problems posed by the rapid advance of AI. Every step toward these policies is a step in the right direction, so we’re happy to work with offices that want to develop a compromise bill or that want to split off a small portion of the bill and focus on a particular topic. We are flexible about having our name attached to any such efforts; the important thing for us is to move these policies forward, not to worry about who gets the credit.

If you want to work on commonsense AI guardrails, but the model legislation isn’t the right fit for your office, then please take a look at our 2025 Action Plan, which has three “bite-sized” policies that we’ve designed to be easy to understand and easy to support. You can also check out our bill endorsements page to get an idea of what other policies CAIP has been pleased to support.

Read the two-page executive summary of the Responsible AI Act (RAIA) here, a section-by-section explainer here, the full model legislation here, a cost estimate here, and a policy brief explaining our reasoning here.

Model Legislation: Responsible AI Act (RAIA)

Model Legislation

Apr 30, 2025

Our model legislation for requiring that AI be developed safely
Center for AI Policy Unveils Model Legislation to Regulate Frontier AI Systems

Model Legislation

Apr 29, 2025

The Responsible AI Act of 2025 establishes critical testing requirements for advanced AI systems.
Release: Model Legislation to Ensure Safer and Responsible Advanced Artificial Intelligence

Announcing our proposal for the "Responsible Advanced Artificial Intelligence Act of 2024"