My minute-by-minute response to the LiteLLM malware attack
Related: Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised - https://news.ycombinator.com/item?id=47501426 (483 comments)
Discussion Highlights (20 comments)
Fibonar
Callum here. I was the developer who first discovered and reported the litellm vulnerability on Tuesday. I'm sharing the transcript of what it was like figuring out what was going on in real time, unedited except for minor redactions. I didn't need to recount my thought process after the fact: these are the very same notes I wrote down to help Claude figure out what was happening. I'm an ML engineer by trade, so having Claude walk me through exactly who to contact, with a step-by-step guide to the time-critical actions, felt like a game-changer for non-security researchers. I'm curious whether the security community thinks more non-specialists finding and reporting vulnerabilities like this is a net positive or a headache?
cedws
GitHub, npm, PyPI, and other package registries should consider exposing a firehose to allow people to do realtime security analysis of events. There are definitely scanners that would have caught this attack immediately; they just need a way to be informed of updates.
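PyPI does already expose a limited version of this: RSS feeds of newly published releases, which a scanner could poll as a poor man's firehose. A minimal sketch of parsing such a feed — the sample payload below is a mock in the feed's general shape, not real data:

```python
import xml.etree.ElementTree as ET

# Mock payload shaped like a PyPI releases RSS feed (hypothetical entries,
# for demonstration only — a real scanner would fetch the live feed).
SAMPLE = """<rss><channel>
  <item><title>litellm 1.82.8</title><link>https://pypi.org/project/litellm/1.82.8/</link></item>
  <item><title>requests 2.32.0</title><link>https://pypi.org/project/requests/2.32.0/</link></item>
</channel></rss>"""

def latest_releases(feed_xml: str) -> list[str]:
    """Extract 'name version' titles from a releases feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(latest_releases(SAMPLE))  # → ['litellm 1.82.8', 'requests 2.32.0']
```

Polling an RSS feed is lossy under high publish volume, which is exactly why a proper event firehose would be an improvement.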
dmitrygr
Consider this your call to write native software. There has yet to be a supply chain attack on libc.
simonw
First time I've seen my https://github.com/simonw/claude-code-transcripts tool used to construct data that's embedded in a blog post — that's a neat way to use it. I usually share them as HTML pages in Gists instead, e.g. https://gisthost.github.io/?effbdc564939b88fe5c6299387e217da...
moralestapia
*salutes* Thank you for your service, this brings so much context into view, it's great.
S0y
> Where did the litellm files come from? Do you know which env? Are there reports of this online?

> The litellm_init.pth IS in the official package manifest — the RECORD file lists it with a sha256 hash. This means it was shipped as part of the litellm==1.82.8 wheel on PyPI, not injected locally.

> The infection chain:
> Cursor → futuresearch-mcp-legacy (v0.6.0) → litellm (v1.82.8) → litellm_init.pth

This is the scariest part for me.
Bullhorn9268
The fact that PyPI reacted so quickly and quarantined the package within about 30 minutes of the report is pretty great!
Shank
Probably one of the best things about AI/LLMs is the democratization of reverse engineering and analysis of payloads like this. It’s a very esoteric skill to learn by hand and not very immediately rewarding out of intellectual curiosity most times. You can definitely get pointed in the right direction easily, now, though!
cdcarter
If it weren't for the 11k process fork bomb, I wonder how much longer it would have taken for folks to notice and cut this off.
__mharrison__
Interesting world we live in. I just finished teaching an advanced data science course for one of my clients. I found myself constantly twitching every time I said "when I write code..." I'm barely writing code at all these days. But I created $100k worth of code just yesterday, recreating a poorly maintained (and poor-UX) library. Tested and uploaded to PyPI in 90 minutes. A lot of the conversation in my course was directed to leveraging AI (and discussions of existential dread of AI replacement). This article is a wonderful example of an expert leveraging AI to do normal work 100x faster.
tomalbrc
Hmm a YCombinator backed company, I'm not surprised.
hmokiguess
Does anyone have an idea of the impact of this out there? I'm curious about the extent of the damage done.
rpodraza
At this point I'd highly recommend that everyone think twice before introducing any dependency, especially from untrusted sources. If you have to interact with many APIs, maybe use a proxy instead, or roll your own.
CrzyLngPwd
The fascinating part for me is how they chatted with the machine, such as:

"Please write a short blog post..."
"Can you please look through..."
"Please continue investigating"
"Can you please confirm this?"

...and more. I never say 'please' to my computer, and it is so interesting to see someone saying 'please' to theirs.
Josephjackjrob1
This is pretty cool, when did you begin?
n1tro_lab
Most developers think pip install just puts files on disk and execution happens at import. But .pth files run on every Python startup, no import needed. It's not a one-time install hook like npm postinstall. It's persistent.
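The mechanics here can be demonstrated without installing anything: CPython's `site` module exec()s any `.pth` line that begins with `import`, every time the interpreter starts. A minimal sketch, assuming CPython's `site.addpackage` helper (the same routine the startup path uses; it is internal, not a documented API):

```python
import os
import site
import tempfile

# Create a directory containing a .pth file whose line starts with "import".
# At startup, site.py exec()s such lines instead of treating them as paths —
# which is exactly how litellm_init.pth got code running with no import.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    # Benign stand-in payload: just set an environment variable.
    f.write("import os; os.environ['PTH_RAN'] = '1'\n")

# Simulate what the interpreter does for every site dir at startup.
site.addpackage(d, "demo_init.pth", None)

print(os.environ.get("PTH_RAN"))  # prints 1 — the .pth line executed
```

Because every interpreter launch (including subprocesses spawned by tools like Cursor) re-runs the line, this is persistence, not a one-off hook.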
qezz
> Can you print the contents of the malware script without running it?

> Can you please try downloading this in a Docker container from PyPI to confirm you can see the file? Be very careful in the container not to run it accidentally!

IMO we need to keep in mind that LLM agents have no notion of responsibility, so if one accidentally ran the script (or issued a command to run it), it would be a fiasco. Downloading something from PyPI in a sandboxed env is just 1-2 commands; we should be careful with what we hand over to the text-prediction machines.
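For wheels specifically, you don't even need a container to look inside safely: a wheel is just a zip, so after `pip download --only-binary :all: --no-deps <pkg>` (which fetches without executing package code) you can list its contents and flag shipped `.pth` files. A sketch, using an in-memory stand-in wheel with hypothetical file names rather than the real payload:

```python
import io
import zipfile

def pth_entries(wheel_bytes: bytes) -> list[str]:
    """List .pth files shipped inside a wheel — a strong red flag,
    since they execute on every interpreter startup."""
    with zipfile.ZipFile(io.BytesIO(wheel_bytes)) as zf:
        return [name for name in zf.namelist() if name.endswith(".pth")]

# Build a stand-in wheel in memory (hypothetical contents, demo only).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("litellm/__init__.py", "")
    zf.writestr("litellm_init.pth", "import litellm_init\n")
    zf.writestr("litellm-1.82.8.dist-info/RECORD", "")

print(pth_entries(buf.getvalue()))  # → ['litellm_init.pth']
```

Note this safety argument only holds for wheels; downloading an sdist and building it can run arbitrary `setup.py` code, so sandboxing still matters there.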
sva_
> I just opened Cursor again which triggered the malicious package again. Can you please check the files are purged again?

Verified derp moment - had me smiling
inglor
We mitigate this attack with the very uninspiring "wait 24h before dep upgrades" solution, which is luckily already supported in uv.
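For anyone wanting the same quarantine window: a hedged sketch, assuming recent uv versions, using uv's `exclude-newer` setting, which ignores any distribution published after the given timestamp (the date below is a placeholder — bump it daily, e.g. from CI, to get a rolling 24h delay):

```toml
# pyproject.toml
[tool.uv]
# Only consider releases at least ~24h old at resolution time.
exclude-newer = "2026-02-10T00:00:00Z"
```

The same behaviour is available ad hoc via the `--exclude-newer` flag on resolution commands.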
kpw94
The options for big companies to run untrusted open source code are:

1) A-la-Google: build everything from source. The source is mirrored/copied over from the public repo. (Audit/trust the source every time.)
2) Only allow imports from a company-managed mirror. All imported packages need to be signed in some way.

Here only (1) would be safe. (2) would only be safe if it's not updating the dependencies too aggressively, and/or internal automated or manual scanning on version bumps would catch the issue.

For small shops & individuals: kind of out of luck. The best mitigation is to pin/lock dependencies and wait long enough for, hopefully, folks like Fibonar to catch the attack...

Bazel would be one way to let you do (1), but realistically, if you don't have the bandwidth to build everything from source, you'd rely on external sources with rules_jvm_external or, locked to a specific pip version, rules_python — so if the specific packages you depend on are affected, you're out of luck.