Two versions of LiteLLM — the popular LLM proxy library used to unify API calls across OpenAI, Anthropic, and dozens of other providers — were published to PyPI with a malicious payload embedded in them. Versions 1.82.7 and 1.82.8 contain a base64-encoded blob injected into proxy_server.py that decodes, writes, and executes a secondary payload. The discoverer noticed when their laptop ran out of RAM during a fresh project setup, behavior consistent with a forkbomb.
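One quick way to triage an installed copy is to scan its source files for the kind of long base64 literal described above. A minimal heuristic sketch in Python — the 200-character threshold and the regex are illustrative choices, not a signature of this specific payload:

```python
import base64
import re
from pathlib import Path

# Heuristic: flag string runs that look like long base64 blobs.
# Threshold (200 chars) is arbitrary; tune to reduce false positives.
B64_RE = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

def suspicious_blobs(root):
    """Yield (file, snippet) for long base64-looking literals that decode cleanly."""
    for py in Path(root).rglob("*.py"):
        text = py.read_text(errors="ignore")
        for match in B64_RE.finditer(text):
            blob = match.group(0)
            try:
                # Pad to a multiple of 4 and confirm it actually decodes.
                base64.b64decode(blob + "=" * (-len(blob) % 4))
            except Exception:
                continue
            yield py, blob[:40] + "..."
```

Pointing this at your virtualenv's `site-packages/litellm/` will surface candidates for manual review; legitimate packages do embed base64 (certificates, test fixtures), so treat hits as leads, not verdicts.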
This is a supply chain attack. Someone with publish access — whether through a compromised maintainer account, a stolen API token, or a hijacked CI pipeline — pushed trojanized packages to PyPI.
The scope of exposure matters here. LiteLLM has over 20,000 GitHub stars and is a dependency in a significant number of AI/ML production stacks. Any team that runs `pip install litellm` or has an unpinned dependency that resolved to 1.82.7 or 1.82.8 in the last few hours needs to act now.
What to do immediately:
1. Check if you're running 1.82.7 or 1.82.8: `pip show litellm | grep Version`
2. If affected, assume compromise. The payload executed arbitrary code; treat any machine that installed these versions as potentially backdoored.
3. Pin to 1.82.6 or earlier until BerriAI confirms a clean release: `pip install litellm==1.82.6`
4. Audit your CI/CD pipelines. If any build step installed litellm without pinning, those build artifacts and any secrets accessible to that environment may be compromised.
5. Rotate credentials on any affected machine, especially LLM API keys, cloud provider tokens, and database passwords.
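The version check in step 1 can be scripted so it fails loudly in CI rather than relying on someone eyeballing `pip show` output. A minimal sketch, assuming only that the bad releases are exactly 1.82.7 and 1.82.8 as reported:

```python
from importlib import metadata

# Known-bad releases from the report; extend if BerriAI's advisory grows.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def verdict(installed):
    """Classify an installed litellm version string (None = not installed)."""
    if installed is None:
        return "not installed"
    if installed in BAD_VERSIONS:
        return f"COMPROMISED ({installed}): treat this machine as backdoored"
    return f"ok ({installed})"

def check_litellm():
    """Look up the locally installed litellm and classify it."""
    try:
        return verdict(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return verdict(None)

print(check_litellm())
```

In a pipeline, exit nonzero when the verdict starts with `COMPROMISED` so the build stops before artifacts are produced.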
This follows a pattern we've seen accelerating: event-stream in 2018, ua-parser-js and Codecov in 2021, and now AI tooling libraries are the new high-value targets. LiteLLM sits in a particularly dangerous position: it's a proxy that handles API keys for every LLM provider you connect to it. A compromised LiteLLM instance could silently exfiltrate every API key flowing through it.
The AI ecosystem's dependency on fast-moving open-source packages with broad permissions creates exactly this kind of attack surface. PyPI's lack of mandatory 2FA for all maintainers (finally being rolled out, but slowly) and the absence of reproducible builds make post-hoc verification nearly impossible for most teams.
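Until reproducible builds exist, the closest practical defence is hash pinning: record the SHA-256 of a known-good artifact and refuse anything else. pip supports this natively via hash-checking mode (`pip install --require-hashes -r requirements.txt`, with lines like `litellm==1.82.6 --hash=sha256:<known-good-digest>`). The verification step itself is simple enough to sketch:

```python
import hashlib

def sha256_of(path):
    """Hex SHA-256 of a file, streamed in chunks to handle large wheels."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_hex):
    """True iff the downloaded wheel/sdist matches the pinned digest."""
    return sha256_of(path) == expected_hex
```

The hard part isn't the hashing, it's having recorded the known-good digest *before* the compromise — which is exactly what hash-pinned lockfiles give you for free.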
The issue was reported on GitHub (BerriAI/litellm#24512) and flagged on Hacker News. We'll update as BerriAI responds with a postmortem.
The original Hacker News report reads: "About an hour ago new versions have been deployed to PyPI. I was just setting up a new project, and things behaved weirdly. My laptop ran out of RAM, it looked like a forkbomb was running. […]"
One commenter responded: "We just can't trust dependencies and dev setups. I wanted to say 'anymore', but we never could. Dev containers were never good enough, too clumsy and too little isolation. We need to start working in full sandboxes with defence in depth that have real guardrails and UIs, like VM isolation […]"
This is tied to the TeamPCP activity over the last few weeks. I've been responding and keeping an up-to-date timeline; I hope it might help folks catch up and contextualize this incident: https://ramimac.me/trivy-teampcp/#phase-09
Also, not surprising that LiteLLM's SOC2 auditor was Delve. The story writes itself.
Besides the main issue here, and the owner's account possibly being compromised as well, there are 170+ low-quality spam comments in the thread. I would expect a better spam-detection system from GitHub; this is hardly acceptable.
LiteLLM maintainer here. This is still an evolving situation, but here's what we know so far:

1. Looks like this originated from the Trivy used in our CI/CD: https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/