AI Agents: Governing Autonomy in the Digital Age

May 22, 2025

Autonomous AI agents—goal-directed, intelligent systems that can plan tasks, use external tools, and act for hours or days with minimal guidance—are moving from research labs into mainstream operations. But the same capabilities that drive efficiency also open new fault lines. An agent that could stealthily obtain and spend millions of dollars, take a major power line offline, or manipulate critical infrastructure systems would be disastrous.

This report identifies three pressing risks from AI agents. First, catastrophic misuse: the same capabilities that streamline business could enable cyber-intrusions or lower barriers to dangerous attacks. Second, gradual human disempowerment: as more decisions migrate to opaque algorithms, power drifts away from human oversight long before any dramatic failure occurs. Third, workforce displacement: decision-level automation spreads faster and reaches deeper than earlier software waves, putting both employment and wage stability under pressure. Goldman Sachs projects that tasks equivalent to roughly 300 million full-time positions worldwide could be automated.

In light of these risks, Congress should:

  1. Create an Autonomy Passport. Before releasing AI agents with advanced capabilities such as handling money, controlling devices, or running code, companies should register them in a federal system that tracks what the agent can do, where it can operate, how it was tested for safety, and whom to contact in an emergency.
  2. Mandate continuous oversight and recall authority. High-capability agents should operate within digital guardrails that limit them to pre-approved actions, while CISA maintains authority to quickly suspend problematic deployments when issues arise.
  3. Keep humans in the loop for high-consequence domains. When an agent recommends actions that could endanger life, move large sums, or alter critical infrastructure, a qualified professional (e.g., a physician, compliance officer, grid engineer, or authorized official) must review and approve the action before it executes.
  4. Monitor workforce impacts. Direct federal agencies to publish annual reports tracking job displacement and wage trends, building on existing bipartisan proposals such as the Jobs of the Future Act, which offer ready-made legislative language.

These measures focus squarely on the areas where autonomy creates the highest risk, leaving low-risk innovation free to flourish. Together, they protect the public and preserve American leadership in AI before the next generation of agents goes live.

Read the full report here.
