The US Artificial Intelligence Safety Institute (AISI) has a critical role to play in ensuring advanced AI is safe. So far, however, AISI has been given neither resources commensurate with that responsibility nor formal authorization from Congress. That must change if the US is to lead on AI.
Roughly one year ago, AISI was created within the National Institute of Standards and Technology. It was directed to facilitate the development of standards for the safety, security, and testing of AI models; develop standards for authenticating AI-generated content; and provide testing environments where researchers can evaluate emerging AI risks and address known impacts.
AISI made significant progress in a short period of time. In addition to broad stakeholder engagement, AISI began pre-deployment testing of major new AI models under agreements with OpenAI and Anthropic. It also helped form a global network of AI safety institutes and prepare for the network’s upcoming inaugural meeting to advance AI safety at a technical level.
In October 2024, AISI was assigned even more responsibilities in a national security memorandum on AI. The memo designated AISI as the primary US government point of contact with private sector AI developers to facilitate voluntary pre- and post-public deployment testing of frontier AI models for safety, security, and trustworthiness, and directed AISI to carry out a range of additional duties.
Despite AISI’s progress and responsibilities, it remains hindered by limited resources and a lack of formal authorization from Congress. As CAIP has observed, compared to safety institutes in the UK, EU, Canada, and Singapore, the US has announced the least funding so far, with only $10 million authorized for FY24. If better-resourced institutes abroad spearhead AI safety work, as the UK AISI has with its report “Early lessons from evaluating frontier AI systems,” the US will be at a disadvantage. America is home to the world’s leading AI companies. Our AI safety institute should be positioned to lead as well.

Further, because AISI was created by an agency-level administrative change rather than by legislation, congressional authorization is needed to provide stability. Without putting AISI’s responsibilities into law, its scope could shift, or the institute could be eliminated altogether, through simple department-level action.
CAIP, along with top AI developers and a variety of industry and policy organizations, is calling on Congress to authorize AISI. This is a bipartisan issue. As Congress enters its lame-duck session, this is a pivotal moment to solidify support for AI safety. By empowering AISI, we can ensure that as AI models become more advanced, they are developed with standardized safety evaluations and benchmarks that protect and benefit society.