This week, the Trump administration reversed its previous stance, signing agreements with Google DeepMind, Microsoft and xAI to conduct safety checks on their advanced AI models. President Trump had previously dismissed such measures as overregulation, but the focus shifted after Anthropic decided not to release its Claude Mythos model over safety concerns.
According to White House National Economic Council Director Kevin Hassett, Trump may soon issue an executive order mandating government testing of advanced AI systems before their release. This shift comes as the Center for AI Standards and Innovation (CAISI), previously known as the US AI Safety Institute, acknowledged that the voluntary agreements with tech giants build on Biden’s policy.
In a press release, CAISI Director Chris Fall stated that these "expanded industry collaborations" would help scale their work "in the public interest at a critical moment". He highlighted the importance of independent, rigorous measurement science in understanding AI and its national security implications. To date, CAISI has completed about 40 evaluations, often gaining access to models with reduced or removed safeguards.
The government’s aim is to better understand model capabilities while ensuring evaluators are aware of top national security concerns. To that end, a dedicated task force on AI and national security is being formed within the interagency framework.







