Tell HN: LiteLLM 1.82.7 and 1.82.8 on PyPI are compromised
I was just setting up a new project and things behaved weirdly: my laptop ran out of RAM, and it looked like a fork bomb was running.
I investigated and found that a base64-encoded blob had been added to proxy_server.py.
It decodes and writes another file, which it then runs.
I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.
It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
- detente18 - 23013 seconds ago
LiteLLM maintainer here. This is still an evolving situation, but here's what we know so far:
1. It looks like this originated from the compromised Trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09
2. If you're on the proxy Docker image, you were not impacted; we pin our versions in requirements.txt.
3. The package is in quarantine on PyPI, which blocks all downloads.
We are investigating the issue and seeing how we can harden things. I'm sorry for this.
- Krrish
- jFriedensreich - 23284 seconds ago
We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy, too little isolation. We need to start working in full sandboxes with defence in depth, with real guardrails and UIs: VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor and more, but with much better usability. These are the same requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.
- ramimac - 24920 seconds ago
This is tied to the TeamPCP activity over the last few weeks. I've been responding and keeping an up-to-date timeline; I hope it might help folks catch up and contextualize this incident:
- hiciu - 26421 seconds ago
Besides the main issue here, and the owner's account possibly being compromised as well, there are 170+ low-quality spam comments in there.
I would expect a better spam-detection system from GitHub. This is hardly acceptable.
- eoskx - 18140 seconds ago
Also, it's not surprising that LiteLLM's SOC 2 auditor was Delve. The story writes itself.
- intothemild - 25145 seconds ago
I just installed Harbor, and it instantly pegged my CPU. I was lucky to see my processes before the system hard-locked.
Basically it fork-bombed `grep -r rpcuser\rpcpassword` processes, trying to find crypto wallets or something. I saw that they spawned from the harness, and killed it.
I got lucky: no backdoor installed here, from what I could make out of the binary.
- rdevilla - 24695 seconds ago
It will only take one agent-led compromise to get some Claude-authored underhanded C into LLVM or Linux or something, and then we will all finally need to reflect on trusting trust, at last and forevermore.
- santiago-pl - 3733 seconds ago
It looks like Trivy was compromised at least five days ago. https://www.wiz.io/blog/trivy-compromised-teampcp-supply-cha...
- cedws - 22713 seconds ago
This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies; it was the same in Trivy's case.
This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.
- shay_ker - 24266 seconds ago
A general question: how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training-data injection seems very possible and could make models silently pwn people.
Do the labs label code versions with an associated CVE to mark them as compromised (telling the model what NOT to do)? Do they use adversarial RL environments to teach what's good/bad? I'm very curious, since it's inevitable that some pwned code ends up as training data no matter what.
- bratao - 26649 seconds ago
Looks like the founder and CTO's account has been compromised. https://github.com/krrishdholakia
- Nayjest - 1279 seconds ago
Use the secure and minimalistic lm-proxy instead:
https://github.com/Nayjest/lm-proxy
`pip install lm-proxy`
Guys, sorry: as the author of a competing open-source product, I couldn't resist.
- ting0 - 7307 seconds ago
I've been waiting for something like this to happen. It's just too easy to pull off. I've been hard-pinning all of my dependency versions and using older versions in any new projects I set up, because those have generally at least been around long enough to vet. But even that has its own set of risks (for example, what if I accidentally pin a vulnerable version). Either that, or I fork everything, including all the deps, and run LLMs over the codebase to vet everything.
Even so, we can't really trust any open-source software with third-party dependencies anymore, because the chains can be so complex and long that it's impossible to vet everything.
It's just too easy to spam out open-source software now, which also means it's too easy to create thousands of infected repos with sophisticated and clever supply chain attacks planted deep inside them. Ones that can be surfaced at any time, too. LLMs have compounded this risk 100x.
- f311a - 21858 seconds ago
Their previous release would have been easily caught by static analysis; the .pth trick is a novel technique.
Run all your new dependencies through static analysis and don't install the latest versions.
I implemented static analysis for Python that detects close to 90% of such injections.
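As an illustration of the kind of check such a tool might run (this is my own crude sketch, not f311a's actual analyzer; the function name and the 200-character threshold are arbitrary choices), a heuristic that flags long base64-decodable string literals like the blob added to proxy_server.py:

```python
import base64
import re

# Candidate: a quoted run of 200+ base64-alphabet characters.
B64_LITERAL = re.compile(r'["\']([A-Za-z0-9+/=]{200,})["\']')

def suspicious_blobs(source: str) -> list:
    """Return a truncated preview of each long, genuinely
    base64-decodable string literal found in Python source text."""
    hits = []
    for match in B64_LITERAL.finditer(source):
        candidate = match.group(1)
        try:
            # validate=True rejects strings that merely look base64-ish.
            base64.b64decode(candidate, validate=True)
            hits.append(candidate[:40])
        except Exception:
            continue
    return hits
```

A real tool would parse the AST and also look for `exec`/`compile` calls fed by the decoded bytes, but even this crude literal scan would have flagged the injected payload.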
- syllogism - 18125 seconds ago
Maintainers need to keep a wall between package publishing and their public repos. Currently, what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge surface area.
Configure the CI to make a release with the artefacts attached. Then have an entirely private repo that can't be triggered automatically as the publisher. The publisher repo fetches the artefacts and does the pypi/npm/whatever release.
- nickvec - 24545 seconds ago
Looks like all of the LiteLLM CEO's public repos have been updated with the description "teampcp owns BerriAI": https://github.com/krrishdholakia
- ajoy - 2425 seconds ago
Reminded me of a similar story with OpenSSH, wonderfully documented in a Veritasium episode, which was just fascinating to watch/listen to.
- tom_alexander - 23354 seconds ago
Only tangentially related: is there some joke/meme I'm not aware of? The GitHub comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."
Since they all seem positive, it doesn't seem like an attack, but I thought the general etiquette for GitHub issues was to use the emoji reactions to show support, so that the comment thread only contains substantive comments.
- sschueller - 25385 seconds ago
Does anyone know a good alternative project that works similarly (sharing multiple LLMs across a set of users)? LiteLLM has been getting worse and trying to get me to upgrade to a paid version. I also had issues with creating tokens for other users, etc.
- eoskx - 23966 seconds ago
This is bad, especially from a downstream-dependency perspective. DSPy and CrewAI also import LiteLLM, so you might not be using LiteLLM as a gateway but still be importing it via those libraries for agents, etc.
- santiagobasulto - 21549 seconds ago
I blogged about this last year[0]...
> ### Software Supply Chain is a Pain in the A*
> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically
AI is not about fancy models; it is about plain old software engineering. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that and to just stick to `requests.post('...')`.
[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
- ilusion - 1739 seconds ago
Does this mean opencode (and other such agent harnesses that auto-update) might also be compromised?
- cpburns2009 - 26767 seconds ago
You can see it for yourself here:
https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...
- sudorm - 796 seconds ago
Are there any timestamps available for when the malicious versions were published on PyPI? I can't find anything except that the last "good" version was published on March 22.
- abhisek - 20135 seconds ago
We just analysed the payload. Technical details here: https://safedep.io/malicious-litellm-1-82-8-analysis/
We are looking for similar attack vectors (.pth injection), signatures, etc. in other PyPI packages that we know of.
- noobermin - 11231 seconds ago
I have to say, the long line of comments from obvious bots thanking the issue opener is a bit too on the nose.
- macNchz - 12422 seconds ago
Was curious: a good number of projects out there have an un-pinned LiteLLM dependency in their requirements.txt (628 matches): https://github.com/search?q=path%3A*%2Frequirements.txt%20%2...
or pyproject.toml (not possible to filter based on absence of a uv.lock, but at a glance it's missing from many of these): https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C...
or setup.py: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22lit...
- Shank - 19251 seconds ago
I wonder at what point ecosystems just force a credential rotation. Trivy and now LiteLLM have probably cleaned out a sizable number of credentials, and now it's up to each person and/or team to rotate. TeamPCP is sitting on a treasure trove of credentials and, based on this, they're probably carefully mapping out what they can exploit and building payloads for each one.
It would be interesting if Python, NPM, Rubygems, etc all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.
- postalcoder - 25367 seconds ago
This is a brutal one. A ton of people use LiteLLM as their gateway.
- mohsen1 - 23099 seconds ago
If it were not spinning up so many Python processes and overwhelming the system with them (friends found out it was consuming too much CPU from the fan noise!), it would have been much more successful. Quite similar to the xz attack: it does a lot of CPU-intensive work. Roughly, the payload:
- spawns a background python
- decodes the embedded stage
- runs the inner collector
- if data was collected:
  - writes the attacker public key
  - generates a random AES key
  - encrypts the stolen data with AES
  - encrypts the AES key with the attacker RSA pubkey
  - tars both encrypted files
  - POSTs the archive to a remote host
- Ayc0 - 1622 seconds ago
Exactly what I needed, thanks.
- mark_l_watson - 20858 seconds ago
A question from a non-Python-security-expert: is committing uv.lock files with specific versions, and only infrequently updating those versions, a reasonable practice?
- rgambee - 25521 seconds ago
Looking forward to a Veritasium video about this in the future, like the one they recently did about the xz backdoor.
- aborsy - 8250 seconds ago
What is the best way to sandbox LLMs and packages in general, while still being able to work on data from outside the sandbox (getting data in and out easily)?
There is also the need for data sanitation, because the attacker could distribute compromised files through the user's data, which would later be run and compromise the host.
- kevml - 26793 seconds ago
More details here: https://futuresearch.ai/blog/litellm-pypi-supply-chain-attac...
- cpburns2009 - 17894 seconds ago
Looks like litellm is no longer in quarantine on PyPI, and the compromised versions (1.82.7 and 1.82.8) have been removed [1].
- foota - 11096 seconds ago
Somewhat unrelated, but if I have downloaded node modules in the last couple of days, how should I best figure out if I've been hacked?
- 6thbit - 25171 seconds ago
The title is a bit misleading.
The package was directly compromised, not hit "by a supply chain attack".
If you use the compromised package, your supply chain is compromised.
- 0fflineuser - 24855 seconds ago
I was running it (as a proxy) in my homelab with docker compose, using the litellm/litellm:latest image https://hub.docker.com/layers/litellm/litellm/latest/images/... . I don't think this was compromised, as it is from 6 months ago, and I checked that it is version 1.77.
I guess I am lucky, as I have Watchtower automatically update all my containers to the latest image every morning if there are new versions.
I also just added it to my homelab this Sunday; I guess that's good timing, haha.
- wswin - 22054 seconds ago
I will hold off on updating anything until this whole Trivy case gets cleaned up.
- westoque - 9273 seconds ago
My takeaway from this is that it should now be MANDATORY to have an LLM scan the entire codebase prior to release or artifact creation. Do NOT use third-party plugins for this; it's so easy to create your own GitHub Action to digest the whole codebase and inspect third-party code. It costs tokens, yes, but it's also cached and should be negligible spend for the security it brings.
- hmokiguess - 22225 seconds ago
What's the best way to identify a compromised machine? Check uv, conda, pip, venv, etc. across the filesystem? Any handy script around?
EDIT: here's what I did, would appreciate some sanity checking from someone who's more familiar with Python than I am, it's not my language of choice.
find / -name "litellm_init.pth" -type f 2>/dev/null
find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l 'Version: 1.82.[78]' {} \; 2>/dev/null
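The find commands above only grep on-disk metadata paths; as a complementary sanity check in the same spirit (a hedged sketch of mine, with a hypothetical function name), you can ask each interpreter environment what it believes is installed. Note that it only covers the interpreter it runs under, so repeat it once per venv/conda env:

```python
from importlib.metadata import distributions

# Versions reported as malicious in this thread.
COMPROMISED = {"1.82.7", "1.82.8"}

def find_compromised(name: str = "litellm", bad=COMPROMISED) -> list:
    """Return installed versions of `name` in THIS interpreter's
    environment that match the known-bad set."""
    hits = []
    for dist in distributions():
        dist_name = dist.metadata["Name"] or ""
        if dist_name.lower() == name.lower() and dist.version in bad:
            hits.append(dist.version)
    return hits

if __name__ == "__main__":
    print(find_compromised())  # empty list means this env looks clean
```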
- rgambee - 25671 seconds ago
Seems that the GitHub account of one of the maintainers has been fully compromised. They closed the GitHub issue for this problem, and all their personal repos have been edited to say "teampcp owns BerriAI". Here's one example: https://github.com/krrishdholakia/blackjack_python/commit/8f...
- dec0dedab0de - 24120 seconds ago
GitHub, PyPI, npm, Homebrew, CPAN, etc. should adopt a multi-person, multi-factor authentication approach for releases. Maybe have it kick in as a requirement after X monthly downloads.
Basically, have all releases require multi-factor auth from more than one person before they go live.
A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.
- xinayder - 24184 seconds ago
When something like this happens, do security researchers immediately contact the hosting companies to suspend or block the domains used by the attackers?
- faxanalysis - 12928 seconds ago
This is secure bug impacting PyPi v1.82.7, v1.82.8. The idea of bracketing r-w-x mod package permissions for group id credential where litellm was installed.
- smakosh - 7747 seconds ago
Check out LLM Gateway: https://llmgateway.io
Migration guide: https://llmgateway.io/migration/litellm
- xunairah - 23559 seconds ago
Version 1.82.7 is also compromised. It doesn't have the .pth file, but the payload is still in proxy/proxy_server.py.
- segalord - 20957 seconds ago
LiteLLM has like a thousand dependencies, so this is expected: https://github.com/BerriAI/litellm/blob/main/requirements.tx...
- dev_tools_lab - 15455 seconds ago
A good reminder to pin dependency versions and verify checksums. SHA-256 verification should be standard for any tool that makes network calls.
- mikert89 - 25320 seconds ago
Wow, this is in a lot of software.
- lightedman - 11477 seconds ago
Write it yourself, fuzz/test it yourself, and build it yourself, or be forever subject to this exact issue.
This was taught in the 90s. Sad to see that lesson fading away.
- oncelearner - 24757 seconds ago
That's a bad supply-chain attack; many folks use LiteLLM as their main gateway.
- 6thbit - 25003 seconds ago
A safeguard worth exploring for some: the automatic import can be suppressed using the Python interpreter's -S option.
This also disables the site import, so it's not viable generically for everyone without testing.
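To see what -S actually switches off, here is a minimal demo of the .pth mechanism itself: `site.addsitedir()` (the same processing the interpreter applies to site-packages at startup, and exactly what `python -S` skips) executes any .pth line that begins with `import`. The file and environment-variable names below are made up for the demo:

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import",
# which is the hook the malicious litellm_init.pth abused.
site_dir = tempfile.mkdtemp()
with open(os.path.join(site_dir, "demo.pth"), "w") as f:
    f.write('import os; os.environ["PTH_DEMO_RAN"] = "1"\n')

# Processing the directory as a site dir executes that line.
site.addsitedir(site_dir)
print(os.environ.get("PTH_DEMO_RAN"))  # → 1
```

Running the same interpreter with `-S` would never process the .pth file, at the cost of also losing the normal site-packages setup.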
- tom-blk - 21300 seconds ago
Stuff like this is happening too much recently. It seems like the more fast-paced areas of development would benefit from a paradigm shift.
- somehnguy - 5094 seconds ago
Perhaps I'm missing something obvious, but what's up with the comments on the reported issue?
Hundreds of downvoted comments like "Worked like a charm, much appreciated.", "Thanks, that helped!", and "Great explanation, thanks for sharing."
- saidnooneever - 18191 seconds ago
Just want to state that this can literally happen to anyone within this messy package ecosystem. The maintainer seems to be doing his best.
If you have tips, I am sure they are welcome. Snarky remarks are useless; don't be a sourpuss. If you know better, help the remediation effort.
- nickspacek - 26289 seconds ago
teampcp taking credit?
https://github.com/krrishdholakia/blockchain/commit/556f2db3...
- # blockchain - Implements a skeleton framework of how to mine using blockchain, including the consensus algorithms.
+ teampcp owns BerriAI
- Aeroi - 14570 seconds ago
What's up with the hundreds of bot replies on GitHub to this?
- 0123456789ABCDE - 24038 seconds ago
airflow, dagster, dspy, unsloth.ai, polar
- gkfasdfasdf - 25125 seconds ago
Someone needs to go to prison for this.
- rvz - 10527 seconds ago
What do we have here? Unaudited software completely compromised, with fake SOC 2 and ISO 27001 certifications.
An actual infosec audit would have rigorously enforced the basic security best practices that would have prevented this supply chain attack.
- fratellobigio - 24737 seconds ago
It's been quarantined on PyPI.
- johnhenry - 20102 seconds ago
I've been developing an alternative to LiteLLM. JavaScript, no dependencies. https://github.com/johnhenry/ai.matey/
- hmokiguess - 19667 seconds ago
What's up with everyone in the issue thread thanking it? Is this an irony trend, or is that a flex on the account takeover from teampcp? This feels wild.
- kstenerud - 24263 seconds ago
We need real sandboxing: out-of-process sandboxing, not in-process. The attacks are only going to get worse.
That's why I'm building https://github.com/kstenerud/yoloai
- Imustaskforhelp - 25285 seconds ago
Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to create supply-chain attacks, all at the same time that threat actors are getting more advanced, partially with the help of LLMs.
First Trivy (which got compromised twice), now LiteLLM.
- claudiug - 16068 seconds ago
LiteLLM's SOC 2 auditor was Delve :))
- homanp - 19783 seconds ago
How were they compromised? Phishing?
- cowpig - 9097 seconds ago
I tried running the compromised package inside Greywall; theoretically it should mitigate everything, but in practice it just forkbombs itself?
- bfeynman - 27283 seconds ago
Pretty horrifying. I only use it as a lightweight wrapper and will most likely move away from it entirely. Not worth the risk.
- cpburns2009 - 24709 seconds ago
LiteLLM is now in quarantine on PyPI [1]. Looks like burning a recovery token was worth it.
- danielvaughn - 22677 seconds ago
I work with security researchers, so we've been on this for about an hour. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed on your machine, and all I can say is: good luck.
The Python ecosystem provides too many nooks and crannies for malware to hide in.
- otabdeveloper4 - 24410 seconds ago
LiteLLM is the second-worst software project known to man. (First is LangChain. Third is OpenClaw.)
I'm sensing a pattern here, hmm.
- Blackthorn - 22763 seconds ago
Edit: ignore this silliness, as it sidesteps the real problem. Leaving it here because we shouldn't remove our own stupidity.
It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.
- zhisme - 22453 seconds ago
Am I the only one with the feeling that in the LLM era we now have a bigger amount of malicious software, say parsers/fetchers of credentials/SSH/private keys? And that it is easier to produce them and then include them in some third-party open-source software? Or is it just that our attention gets focused on such things?
- chillfox - 25131 seconds ago
Now I feel lucky that I switched to just using OpenRouter a year ago, because LiteLLM was incredibly flaky and kept causing outages.
- iwhalen - 26831 seconds ago
What is happening in this issue thread? Why are there 100+ satisfied slop comments?
- deep_noz - 26507 seconds ago
Good thing I was too lazy to bump versions.
- te_chris - 22578 seconds ago
I reviewed the LiteLLM source a while back. Without wanting to be mean, it was a mess. I steered well clear.
- canberkh - 11586 seconds ago
helpful
- TZubiri - 25389 seconds ago
Thank you for posting this, interesting.
I hope that everyone's course of action will be to uninstall this package permanently and to avoid installing packages similar to it.
To reduce supply chain risk, you need to evaluate not only the vendor (even if gratis and open source) but also the advantage it provides.
Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.
Remember that you are programmers and you can just program. You don't need a framework; you are already using the API of an LLM provider. Don't put a hat on a hat, don't get killed for nothing.
And even if you weren't using this specific dependency, check your deps: you might have shit like this in your requirements.txt and were merely saved by chance.
An additional note: the dev will probably post a post-mortem, what was learned, how it was fixed, and maybe downplay the thing. Ignore that; the only reasonable step after this is closing the repo, but there's no incentive to do that.