Anthropic's powerful Mythos AI model, built to test the limits of AI-assisted cybersecurity and restricted to a select few companies under Project Glasswing, has been accessed by an unauthorized group organized on Discord.
The group gained access on April 7th by combining information from a recent data breach with educated guesses about where the model was hosted online. Members have been actively engaging with the model for two weeks and provided evidence of their access to Bloomberg; they reportedly have not used the model for cybersecurity purposes.
Anthropic has opened an investigation but says it currently sees no evidence that its own systems were compromised or that the unauthorized access extends beyond a third-party vendor environment. The company had released Mythos to a limited number of companies for testing on that same day, making the timing of the breach suspicious.
The incident highlights ongoing concerns about AI model security and control, especially as governments and tech giants vie for access to cutting-edge technology. Anthropic has no plans to release Mythos publicly, citing fears it could be weaponized, and this unauthorized access may prompt further discussion of AI regulation and safety measures.







