Mücahithan Avcıoğlu
06 May 2026 • Updated: 06 May 2026
Google DeepMind, Microsoft and xAI have signed agreements with the US government to allow early testing of advanced artificial intelligence models for national security risks, the Commerce Department said Tuesday.
The department's Center for AI Standards and Innovation said the partnerships will allow pre-deployment testing and research to better assess advanced AI capabilities and improve security.
The agreements also allow the government to evaluate models before their public release and to conduct post-deployment assessments.
The center said it has already completed more than 40 evaluations, including reviews of unreleased frontier AI systems.
Developers frequently provide the center with models whose safeguards have been reduced or removed, enabling a more comprehensive evaluation of national security-related capabilities and risks, it added.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” Director Chris Fall said.
“These expanded industry collaborations help us scale our work in the public interest at a critical moment,” Fall added.
The center has been designated as the primary US government point of contact for industry on testing, collaborative research and best-practice development related to commercial AI systems.