Comment on Frontiers in AI for Science, Security, and Technology (FASST) Initiative

Claudia Wilson
November 12, 2024

Response to the Department of Energy

On September 12, 2024, the Department of Energy's (DOE) Office of Critical and Emerging Technologies released a request for information on the Frontiers in AI for Science, Security, and Technology (FASST) Initiative. FASST seeks to build the world's most powerful integrated scientific AI models, supporting scientific discovery, applied energy deployment, and national security applications.

The Center for AI Policy responded to Question 3(a), which asked about open sourcing scientific and applied energy AI models, and Question 3(c), which asked about considerations for the DOE's ongoing AI red-teaming and how to address safety risks.

In response to these questions, we provide the following policy proposals: 

  • Safety testing: When conducting safety testing, consider both technical risks, which arise without malicious actors, and misuse risks.
  • Partner organizations: Work with government agencies (e.g., the AI Safety Institute) and industry partners (e.g., METR) to tailor safety testing to a DOE context.
  • Open sourcing: Adjust Know Your Customer (KYC) requirements and available model components based on the results of safety testing.

Read the full comment here.
