Letter to the Editor of Reason Magazine

Jason Green-Lowe
July 8, 2024

Below is a letter to the editor (LTE) responding to Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation in his Reason post, "The Authoritarian Side of Effective Altruism Comes for AI." You can read Chilson's post here.

********

Dear Editor,

Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation is deeply misleading. Contrary to his claims, the Responsible Advanced Artificial Intelligence Act (RAAIA) does not broadly regulate benign AI systems like weather forecasting models. It explicitly exempts them, focusing oversight only on the most advanced and potentially dangerous AI systems.

Chilson also falsely asserts that RAAIA would force all open-source AI projects to track their users. In reality, open-source efforts would be exempt from application fees, and the bill requires regulators to apply safety rules fairly so that open approaches are not disadvantaged. Finally, while Chilson inflates the duration of the proposed emergency powers to six months, they would actually lapse after two months without presidential approval, a reasonable precaution for true AI emergencies.

It's crucial that Congress swiftly pass RAAIA to ensure we have a world-class, commonsense regulatory framework in place before AI systems become too powerful to control. Narrowly targeted rules and emergency backstops will help us harness AI's immense benefits while mitigating catastrophic risks. It's time to act on AI safety before it's too late.

Sincerely,

Jason Green-Lowe

Executive Director

Center for AI Policy (CAIP)

********

Learn more about CAIP's model legislation, the Responsible Advanced Artificial Intelligence Act of 2024 (RAAIA).

The model legislation contains several key policies requiring that AI be developed safely, including permitting, hardware monitoring, civil liability reform, a dedicated government office, and emergency powers.
