Google has recently secured access for the U.S. Department of Defense to its artificial intelligence technology, marking a significant expansion of the Pentagon's AI capabilities on classified networks. Following Anthropic's public stance against the Trump administration's demand for unrestricted use of its AI, Google now finds itself positioned to capitalise on Anthropic's ethical reservations.
The deal comes amid an ongoing legal battle between Anthropic and the DoD over the department's designation of the company as a 'supply-chain risk' for refusing to comply with certain contract terms. While Anthropic awaits a judge's ruling, Google appears ready to move forward, albeit with caveats that do not fully address the ethical concerns at stake.
Notably, Google's agreement includes language asserting that its AI will not be used for domestic mass surveillance or in autonomous weapons, a clause reminiscent of similar provisions negotiated with OpenAI. However, the enforceability of such clauses remains uncertain, raising questions about how much they will actually constrain deployment and use.
In a twist of irony, 950 Google employees have spoken out against this very deal, urging the company to follow Anthropic's ethical lead. In an open letter, they call for transparency and protections against harmful uses of AI, a plea that may fall on deaf ears as tech giants navigate the complex landscape of national defence.
The ongoing saga underscores the growing tension between technological advancement and ethical responsibility. As more companies consider their stance on AI contracts with government entities, the broader question looms: can ethical considerations keep up with the pace of innovation?







