Six months ago, Mercor, an AI data-training startup, was soaring after raising $350 million at a $10 billion valuation. Now it faces a crushing reality check: a damaging data breach.
The hack, carried out via the popular open-source tool LiteLLM, exposed gigabytes of sensitive information, including candidate profiles, employer data, source code, and API keys. The company remains tight-lipped about the breach's severity, vowing only to investigate and communicate with customers.
Meta, a key customer, has indefinitely paused its contracts with Mercor because of the breach. Meanwhile, five contractors have filed lawsuits over alleged exposure of their personal data. OpenAI is investigating, and other large players may reconsider their partnerships with Mercor.
The financial stakes are steep: Mercor was on track for $1 billion in annual revenue before the leak. And the fallout extends beyond the balance sheet, as the incident raises questions about AI security and the vulnerability of even the most secure-seeming companies.