About The Center for AI Policy

CAIP develops policy and conducts advocacy to mitigate catastrophic risks from AI.

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy.

Based in Washington, DC, CAIP works to ensure AI is developed and implemented with effective safety standards.

CAIP in the Press

The Center for AI Policy (CAIP) has been proud to participate in public discussions about how to build commonsense guardrails that will keep AI safe for all Americans. AI can be a dense and technical topic, but we do our part to make it accessible – here’s a sample of our coverage from mainstream publications:

  • The Hill – Potential cuts at AI Safety Institute stoke concerns in tech industry.
  • Politico – Superintelligent AI fears: They’re baaa-ack!
  • CBS KNOE 8 News – Congress gets crash course in dangers, benefits of AI
  • Fortune – AI safety advocates slam targeting of standards agency
  • N.Y. Times – Emboldened by Trump, AI companies lobby for fewer rules
  • Fox News – Experts praise long-awaited AI report from Congress
  • Scripps News Morning Rush – TV interview on the risks of AI
  • DC Journal – To prevent an AI catastrophe, Biden’s EO needs teeth
  • WWL First News with Tommy Tucker – Are we taking the threats from artificial intelligence seriously enough?

Frequently asked questions

With AI advancing rapidly, we urgently need to build the government’s capacity to quickly identify and respond to AI's national security risks.
Who makes up the CAIP team?

What is CAIP's mission?

What are CAIP’s funding sources and affiliations?

How can I get involved?