The Center for AI Policy worked to bolster the government's capacity to manage powerful AI.

The Center for AI Policy (CAIP) is a non-partisan, non-profit advocacy group that worked with Congress from June 2023 through May 2025 to promote commonsense solutions to the catastrophic risks from advanced AI.

CAIP's work

Advising Policymakers

AI will be incredibly transformative, and we’re collectively unprepared for many of its worst risks. To help solve this problem, CAIP drafted model legislation, advocated for bipartisan solutions, hosted events to foster discussion and information sharing, gave feedback on others’ policies, endorsed bills that would help protect against AI risk, and connected policymakers with leading experts in AI.

We don't just talk about risks. We develop and advocate for solutions.
We share policy proposals, draft model legislation, and give feedback on others' policies. This work is collaborative and iterative. We take in ideas from our network of leading researchers and practitioners to make recommendations that are both robust and practical. 

Whistleblower Protections for AI Employees

Whistleblowers play a powerful role in minimizing the risk of public harm from AI. Our latest research shows how protections can be designed to address concerns such as trade secret violations.

June 19, 2025

AI Agents: Governing Autonomy in the Digital Age

A report on policies to address the emerging risks of increasingly autonomous AI agents.

May 22, 2025

Building Resilience to AI's Disruptions to Emergency Response

An emergency response system overwhelmed with AI-generated incidents is a crisis in the making.

May 6, 2025

CAIP priorities

Our policy mission is simple: require safe AI.

To ensure powerful AI is safe, we need effective governance. That’s why our policy recommendations focus on ensuring the government has enough:

  • Visibility and expertise to understand AI development
  • Adeptness and authority to respond to rapidly evolving risks
  • Infrastructure to support developers in innovating safely

Our Priorities


  • Build government capacity
  • Safeguard development
  • Mitigate extreme risk

As AI grows more capable, so do its risks. We must prepare governance now to keep pace.

With AI advancing rapidly, we urgently need to develop the government's capacity to identify and respond to AI's national security risks.

Frequently asked questions

  • Who makes up the CAIP team?
  • What is CAIP's mission?
  • What are CAIP's funding sources and affiliations?
  • How can I get involved?