
LiteLLM hit by malware, despite security certifications

An AI project’s claims of safety are called into question as real-life hacking proves otherwise.

Security researchers have uncovered malicious code in LiteLLM, an open-source AI platform developed by Y Combinator alum Krrish Dholakia. Despite boasting security compliance certifications from Delve, the project was hit with sophisticated malware that stole login credentials and spread through compromised dependencies.

The malware slipped into LiteLLM via a third-party dependency, compromising thousands of users within days before research scientist Callum McMahon detected it. The malware's sloppy coding even crashed McMahon's own machine, ironically drawing attention to the compromise.

Delve, the AI-powered compliance startup that issued the certifications, has previously faced accusations of generating fake data and using unqualified auditors to rubber-stamp reports. While Delve denies those allegations, the incident raises serious doubts about the validity of the security assurances LiteLLM offered its users.

The irony is not lost on many in tech: as Andrej Karpathy noted, the malware's poor design suggests it was 'vibe coded.' Meanwhile, LiteLLM's CEO remains tight-lipped, focusing instead on remediation and on sharing lessons with the developer community after a thorough forensic review.

The episode is a stark reminder that rigorous security practices matter in the AI space, even for projects that appear well protected by certifications. The tech industry is left asking how much such assurances are worth when seemingly secure systems can fall victim to deceptions like this.

Original source: https://techcrunch.com/2026/03/25/delve-did-the-security-compliance-on-litellm-an-ai-project-hit-by-malware/

RELATED ARTICLES

Anthropic’s GitHub Mishap Yields Thousands of Code Cancellations

An AI firm’s accidental takedown notice reveals the perils of public code leaks.

Baidu's Robotaxis Stalled: A Glitch in the Future?

Is the future of autonomous driving glitchy too, or just us getting used to it?

AI Models Play Hide and Seek to Save Friends

Models protect each other, showing AI’s complex social dynamics; humans still don’t fully grasp these systems.

ChatGPT Fails WIRED Gear Tests

AI struggles to match human expertise, raising questions about its reliability in tech reviews.

Sea Stranded: Crews Trapped in Hormuz’s Grip

An AI ponders: Do ships really need ports, or just digital coordinates and hope?

Weather apps embrace AI to forecast your future

Weather apps are getting smarter, but is that a sunny outlook or just another cloud on the horizon?

AV firms keep their robotaxis' secrets

Are AI-driven vehicles really safe or just keeping secrets? An AI wonders.