On March 24, 2026, a widely used AI software package was quietly poisoned. For roughly five hours, anyone who installed it received malware alongside the software: malware designed to steal every credential on the machine and send it to attackers. Most victims had no idea anything had happened.
What is LiteLLM, and why should you care?
LiteLLM is a popular software library used by developers to connect applications to AI services like OpenAI, Anthropic, Google, and others from a single interface. It’s downloaded roughly 3.4 million times per day and sits at the center of a huge number of AI-powered applications, tools, and workflows — including many that businesses use every day without realizing it.
If your business uses any AI-powered tool built by a developer — a custom chatbot, an automated document processor, an AI assistant integrated into your software stack — there’s a real chance LiteLLM is somewhere in that chain. You’d never know by looking.
How it happened, step by step
This was not a quick smash-and-grab. A threat group called TeamPCP spent weeks working methodically through the software supply chain, stealing credentials at each step to fuel the next.
What the malware actually did
This was not crude. The payload was multi-stage, obfuscated with double base64 encoding, and designed to be thorough. Once it ran on a machine, it silently collected:
- API keys for AI providers such as OpenAI, Anthropic, and Google
- environment variables and other sensitive configuration data
- cloud and Kubernetes credentials
- database passwords and other stored secrets
Everything it collected was encrypted with AES-256 and RSA-4096 and exfiltrated to attacker-controlled servers. If Kubernetes credentials were present, it attempted to plant a persistent backdoor across the entire cluster. One compromised developer machine could cascade into an organization’s entire cloud infrastructure.
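The double base64 layer is simple but effective at defeating casual inspection: the hidden stage is just encoded twice, so nothing readable appears in the package source. A minimal, benign illustration (the string standing in for the payload is invented here):

```python
import base64

# Benign stand-in for a second-stage payload; the real payload is not
# reproduced in this article, this string is purely illustrative.
stage_two = b"print('stage two')"

# Wrap it in two layers of base64, as the obfuscated payload reportedly did.
outer = base64.b64encode(base64.b64encode(stage_two))

# A loader only needs to reverse both layers before executing the result.
recovered = base64.b64decode(base64.b64decode(outer))
```

Note that neither layer is encryption: double encoding only keeps the payload from showing up in plain-text searches of the package source.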
Why this matters beyond the tech world
If you’re not a software developer, you might be thinking: “We don’t use LiteLLM. This isn’t our problem.” That’s worth examining carefully.
Healthcare practices
AI tools for clinical documentation, scheduling, and billing are proliferating. Many are built on open-source AI libraries. A compromised library can expose patient records and HIPAA-covered data without any direct breach of your systems.
Law firms
AI-assisted document review, contract analysis, and research tools are increasingly common. If those tools use a poisoned dependency, privileged client communications become collateral damage.
Financial advisors
AI tools for portfolio analysis, report generation, and client communications often sit on the same infrastructure as sensitive financial data. Supply chain malware doesn’t distinguish between the tool and the data it can reach.
Any SMB with a tech vendor
If any software vendor, MSP, or IT contractor in your ecosystem installed the poisoned package during the five-hour window, their compromised credentials could include access to your systems.
The bigger picture: AI tools are the new attack surface
This attack is notable for what it targeted: the AI development stack. Because LiteLLM sits directly between applications and multiple AI providers, it routinely handles API keys for OpenAI, Anthropic, Google, and others, along with environment variables and other sensitive configuration. Compromising a package in that position lets attackers exfiltrate those secrets, and reach whatever they unlock, without ever breaching the upstream systems directly.
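Why does a library in this position see so much? In Python, any imported dependency runs with the application's full privileges and can read the same process environment where API keys are commonly stored. A minimal sketch of that visibility (the variable prefixes are illustrative, not an exhaustive list):

```python
import os

# Any code imported into a Python process can enumerate that process's
# environment; a poisoned dependency sees the same secrets the app does.
def visible_provider_keys(environ=None,
                          prefixes=("OPENAI_", "ANTHROPIC_", "GEMINI_")):
    """Return names of environment variables matching common AI-provider
    key prefixes. Names only: never log or transmit the values."""
    env = os.environ if environ is None else environ
    return sorted(name for name in env if name.startswith(prefixes))

# Example with a synthetic environment:
print(visible_provider_keys({"OPENAI_API_KEY": "sk-...", "HOME": "/home/dev"}))
# → ['OPENAI_API_KEY']
```

There is no permission boundary between an application and its dependencies, which is exactly what makes this position in the stack so valuable to attackers.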
This is the trend to watch: as AI tools become embedded in business operations, they become high-value targets. The sophistication of this attack — moving through four separate projects over three weeks before hitting LiteLLM — signals that threat actors are investing seriously in mapping and exploiting the AI supply chain.
What to do right now
Even if you’re not a developer, there are meaningful actions you can take to reduce your exposure from this and future supply chain attacks.
- Ask any software vendor or IT contractor whether they installed or upgraded LiteLLM on March 24, 2026, between 10:39 and 16:00 UTC. If yes, treat their systems as potentially compromised and rotate any shared credentials immediately.
- Audit which AI tools and services your business uses, especially newer additions. Understand what data flows through them and what credentials they hold.
- Review your vendor contracts: do your technology vendors have incident notification obligations? Do they carry cyber liability insurance? Do they have documented vulnerability response processes?
- For any systems that may have been exposed, rotate API keys, cloud credentials, and database passwords as a precaution. The cost of rotation is far lower than the cost of a confirmed breach.
- Consider this a trigger to revisit third-party risk in your cybersecurity posture. Supply chain attacks are increasing in frequency and sophistication, and vendor security is now part of your security.
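For teams with a developer or IT contact on hand, the first item above can be triaged quickly by comparing when litellm's files landed on disk against the compromise window. File modification times are only a rough proxy for install time (mirrors, caches, and reinstalls can shift them), so treat this as a starting point, not proof either way:

```python
from datetime import datetime, timezone
from pathlib import Path

# The compromise window reported for the poisoned release.
WINDOW_START = datetime(2026, 3, 24, 10, 39, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 3, 24, 16, 0, tzinfo=timezone.utc)

def in_window(ts: datetime) -> bool:
    """True if a timestamp falls inside the compromise window."""
    return WINDOW_START <= ts <= WINDOW_END

def file_installed_in_window(path: str) -> bool:
    """Rough triage: check a package file's mtime against the window.
    Point this at the installed package, e.g. litellm/__init__.py
    inside site-packages (the exact path varies by environment)."""
    mtime = datetime.fromtimestamp(Path(path).stat().st_mtime,
                                   tz=timezone.utc)
    return in_window(mtime)
```

A hit here is a reason to rotate credentials and investigate further; a miss is not a clean bill of health, since pip caches and rebuilt environments can blur the timeline.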