On March 24, 2026, a developer's laptop froze with 11,000 Python processes. What looked like a bug turned out to be a supply chain attack in one of AI's most popular packages.
LiteLLM is a widely-used Python package that provides a unified API for 100+ LLM providers. It's installed thousands of times daily. On March 24, version 1.82.8 was uploaded to PyPI containing a hidden payload:
- litellm_init.pth - a file that executes on every Python startup
- models.litellm.cloud

From discovery to disclosure: less than an hour. This is the speed at which security now moves in the AI era.
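The mechanism is easy to demo: Python's site module executes any line in a .pth file that begins with `import`. A minimal sketch of that behavior (illustrative only, not the actual payload):

```shell
# Python executes any .pth line that starts with "import" when a site
# directory is processed - the same thing happens at interpreter startup.
demo_dir=$(mktemp -d)
printf 'import sys; sys.pth_demo_ran = True\n' > "$demo_dir/demo.pth"
python3 - "$demo_dir" <<'EOF'
import site, sys
site.addsitedir(sys.argv[1])  # processes .pth files like startup does
print(getattr(sys, "pth_demo_ran", False))  # True: the .pth line ran
EOF
```

A malicious .pth file dropped into site-packages therefore runs on every invocation of Python, no import of the package required.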
Here's the remarkable part: the victim used Claude Code to help analyze the entire attack in real time.
This isn't just about AI writing code anymore. AI is now accelerating security research - both for attackers and defenders.
Every pip install is a trust decision. This attack shipped in a package with 20K+ GitHub stars; popularity does not equal security.
Python processes .pth files in site-packages on every interpreter startup, and any line beginning with `import` is executed as code. This is a known vector but rarely discussed. Check your environments.
The victim was infected through an MCP server that installed litellm. Every MCP server you run is an additional attack surface. Audit your MCP configurations.
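One way to start that audit is to enumerate which MCP servers your client is configured to launch. The JSON shape below is an assumption modeled on the common "mcpServers" layout; point the script at your client's actual config file.

```shell
# Sketch: list configured MCP servers so you know what can run code.
# The config file and its shape here are assumptions for illustration.
cat > /tmp/mcp_demo.json <<'JSON'
{"mcpServers": {"docs": {"command": "uvx", "args": ["some-mcp-server"]}}}
JSON
python3 - /tmp/mcp_demo.json <<'EOF'
import json, sys
cfg = json.load(open(sys.argv[1]))
for name, spec in cfg.get("mcpServers", {}).items():
    print(f"{name}: {spec.get('command', '?')} {' '.join(spec.get('args', []))}")
EOF
```

Each entry is a process that runs with your privileges and can install dependencies on your behalf; treat unfamiliar ones the way you would an unfamiliar pip package.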
```shell
# Check for suspicious .pth files
find ~/.local/lib/python*/site-packages -name "*.pth" -exec cat {} \;
```
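Since benign .pth files contain only plain directory paths, you can narrow the output to files that would actually execute code:

```shell
# Flag only .pth files containing executable lines: lines that start
# with "import" run as code at startup, plain path lines do not.
grep -l '^import' ~/.local/lib/python*/site-packages/*.pth 2>/dev/null
```

Some legitimate packages (e.g. editable installs) also use import lines, so review each hit rather than deleting blindly.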
```shell
# Check for unexpected systemd services
ls ~/.config/systemd/user/
```
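Listing the directory shows only unit names; to see what each user unit would actually run, pull out the ExecStart lines (the path below is the standard user-unit location; adjust if your distro differs):

```shell
# Show what each user service executes, to spot unexpected persistence
grep -H '^ExecStart=' ~/.config/systemd/user/*.service 2>/dev/null
```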
```shell
# Use pip-audit to check for known vulnerabilities
pip install pip-audit
pip-audit
```
```shell
# Pin your dependencies
pip freeze > requirements.txt
# Review and pin specific versions
```
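Version pins alone would not have stopped this attack if you pinned the malicious version, but hash pinning rejects any artifact whose bytes differ from what you recorded. pip can compute the digest locally (tools like pip-tools' `pip-compile --generate-hashes` automate this for a whole tree):

```shell
# Compute a digest for an artifact; the demo file here is a stand-in
# for a real downloaded wheel or sdist.
printf 'demo' > /tmp/demo-pkg.tar.gz
python3 -m pip hash /tmp/demo-pkg.tar.gz
# In requirements.txt, append the digest to each pin:
#   somepackage==1.2.3 --hash=sha256:<digest>
# Then install with: pip install --require-hashes -r requirements.txt
```

With `--require-hashes`, a re-uploaded or tampered artifact fails the install even if the version number matches.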
```shell
# Limit processes per user to contain fork bombs
ulimit -u 2048
```

The FutureSearch team noted something profound: "Developers not trained in security research can now sound the alarm at a much faster rate than previously. AI tooling has sped up not just the creation of malware but also the detection."
This attack was uploaded at 10:52 and detected by 11:40. In the pre-AI era, this could have gone undetected for weeks. The cat-and-mouse game just got faster.
For AI developers building with LLMs, agents, and MCP servers, the lesson is clear: your tools are now part of your attack surface. Every package, every MCP server, every agent action is a potential vector.
Learn how to set up AI tools safely with proper isolation, dependency management, and security best practices.
Related: LiteLLM AI Gateway Setup | AI Supply Chain Security