Autonomous AI agents—goal-directed, intelligent systems that can plan tasks, use external tools, and act for hours or days with minimal guidance—are moving from research labs into mainstream operations. But the same capabilities that drive efficiency also open new fault lines. An agent that could stealthily obtain and spend millions of dollars, cripple a major power line, or manipulate critical infrastructure systems would be disastrous.
This report identifies three pressing risks from AI agents. First, catastrophic misuse: the same capabilities that streamline business could enable cyber-intrusions or lower barriers to dangerous attacks. Second, gradual human disempowerment: as more decisions migrate to opaque algorithms, power drifts away from human oversight long before any dramatic failure occurs. Third, workforce displacement: decision-level automation spreads faster and reaches deeper than earlier software waves, putting both employment and wage stability under pressure. Goldman Sachs projects that tasks equivalent to roughly 300 million full-time positions worldwide could be automated.
In light of these risks, Congress should:
These measures are focused squarely on where autonomy creates the highest risk, ensuring that low-risk innovation can flourish. Together, they act to protect the public and preserve American leadership in AI before the next generation of agents goes live.
Read the full report here.
Report on the cybersecurity implications of evolving AI capabilities, including actionable policy guidance for Congress.