Cube IT Blog

The LiteLLM Attack: Why Your AI Tools Are Now a Security Target

On March 24, 2026, a widely-used AI software package was quietly poisoned. For roughly five hours, anyone who installed it got malware alongside their software — malware designed to steal every credential on their machine and send it to attackers. Most victims had no idea anything happened.

What is LiteLLM, and why should you care?

LiteLLM is a popular software library used by developers to connect applications to AI services like OpenAI, Anthropic, Google, and others from a single interface. It’s downloaded roughly 3.4 million times per day and sits at the center of a huge number of AI-powered applications, tools, and workflows — including many that businesses use every day without realizing it.

If your business uses any AI-powered tool built by a developer — a custom chatbot, an automated document processor, an AI assistant integrated into your software stack — there’s a real chance LiteLLM is somewhere in that chain. You’d never know by looking.

The core concept: supply chain attacks

A supply chain attack doesn’t break into your systems directly. It compromises something your systems trust — a software package, a tool, a vendor — and rides that trust straight in. You didn’t click a bad link. You didn’t mistype a password. Your software just did what it was supposed to do.

How it happened, step by step

This was not a quick grab-and-go. A threat group called TeamPCP spent more than three weeks working through the software supply chain methodically, stealing credentials at each step to fuel the next one.

March 1
The first breach: Aqua Security and Trivy
Attackers compromised Trivy, an open-source security scanning tool used inside automated build pipelines. Credentials from that breach were only partially rotated, leaving a door open.
March 19
Trivy weaponized, Checkmarx hit next
TeamPCP used the surviving credentials to inject malware into Trivy’s GitHub Actions workflow. Every pipeline that ran Trivy from this point silently had its secrets stolen. Checkmarx, another security tool, was hit the same day.
March 23 to 24
LiteLLM’s pipeline runs the poisoned Trivy
LiteLLM’s automated build process ran Trivy as part of its routine security checks without pinning a specific version. The compromised Trivy action stole LiteLLM’s PyPI publishing credentials.
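The missing safeguard here is pinning. A hypothetical GitHub Actions fragment illustrates the difference; the action name is real, but the step layout and the SHA are illustrative, not taken from LiteLLM’s actual workflow:

```yaml
# Risky: a mutable tag. Whoever controls the action's repository
# controls what code runs in your pipeline from one day to the next.
- uses: aquasecurity/trivy-action@master

# Safer: pinned to an immutable commit SHA (example value, not a real pin),
# so the pipeline runs exactly the code that was reviewed, every time.
- uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
```

Pinning would not have prevented Trivy itself from being compromised, but it would have stopped the poisoned update from flowing into LiteLLM’s build automatically.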
March 24, 10:39 UTC
Malicious packages published to PyPI
Using the stolen credentials, attackers published LiteLLM versions 1.82.7 and 1.82.8 containing a multi-stage credential-stealing payload. Anyone who ran pip install litellm during this window got the malware.
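If you have technical staff, checking a machine for the compromised releases takes a few lines. A minimal sketch using only the Python standard library, with the version numbers taken from the disclosure above:

```python
from importlib.metadata import PackageNotFoundError, version

# Versions published by the attackers during the five-hour window.
COMPROMISED = {"1.82.7", "1.82.8"}

def check_litellm() -> str:
    """Report whether the locally installed litellm is a known-bad version."""
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        return "litellm is not installed on this machine"
    if installed in COMPROMISED:
        return f"WARNING: compromised version {installed} is installed"
    return f"installed version {installed} is not one of the known-bad releases"

if __name__ == "__main__":
    print(check_litellm())
```

Note that a clean result today does not rule out exposure: a machine that installed a bad version during the window and later upgraded would pass this check but still have leaked its secrets.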
March 24, approximately 16:00 UTC
Packages removed, incident disclosed
PyPI quarantined the malicious packages roughly five hours after publication. LiteLLM’s team suspended releases, rotated credentials, and began notifying affected users. Investigation is ongoing.

What the malware actually did

This was not crude. The payload was multi-stage, obfuscated with double base64 encoding, and designed to be thorough. Once it ran on a machine, it silently collected:

AWS, GCP, Azure credentials
SSH private keys
API keys from .env files
Kubernetes cluster secrets
Database passwords
Docker configuration
CI/CD pipeline secrets
Shell history and git configs

Everything it collected was encrypted with AES-256 and RSA-4096 and exfiltrated to attacker-controlled servers. If Kubernetes credentials were present, it attempted to plant a persistent backdoor across the entire cluster. One compromised developer machine could cascade into an organization’s entire cloud infrastructure.
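For the technically inclined, the list above doubles as an audit checklist. A minimal sketch that reports which of these credential stores exist on a machine — the paths are common defaults, not locations taken from the malware itself:

```python
from pathlib import Path

# Common default locations for the credential types listed above.
# These are illustrative; adjust for your environment.
CANDIDATES = [
    "~/.aws/credentials",
    "~/.ssh/id_rsa",
    "~/.ssh/id_ed25519",
    "~/.kube/config",
    "~/.docker/config.json",
    "~/.gitconfig",
    "~/.bash_history",
]

def audit(paths=CANDIDATES):
    """Return the subset of candidate secret files present on this machine."""
    return [p for p in paths if Path(p).expanduser().exists()]

if __name__ == "__main__":
    for p in audit():
        print("present, would have been stolen:", p)
```

Every file this prints is something a stealer running under your user account could have read, no exploit required.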

The fork bomb discovery

The attack was first noticed when a developer’s machine became unresponsive from RAM exhaustion. The malware contained an unintentional bug: a file that runs on every Python startup triggered a fork bomb, an exponential explosion of processes. The bug that crashed the attacker’s own operation is what led to public disclosure.
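The startup hook in question is most likely Python’s `.pth` mechanism — an assumption on our part, since the article above only says “a file that runs on every Python startup.” Any line in a `.pth` file that begins with `import` is executed as code when the site machinery processes that directory. A benign demonstration that triggers the same machinery manually:

```python
import os
import site
import tempfile

# A .pth file placed in a site directory is processed at interpreter
# startup, and lines beginning with "import" are executed as code.
# Here we invoke the same machinery explicitly on a temp directory.
hook_dir = tempfile.mkdtemp()
with open(os.path.join(hook_dir, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_HOOK_RAN'] = 'yes'\n")

site.addsitedir(hook_dir)  # processes demo.pth, executing the import line
print(os.environ.get("PTH_HOOK_RAN"))
```

A hook like this gone wrong explains the fork bomb: code that spawns new processes on every interpreter startup also runs inside each process it spawns, so the process count multiplies until the machine runs out of memory.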
By the numbers:

  • 3.4M: daily downloads of LiteLLM before the attack
  • ~5 hours: the window during which the malicious packages were publicly available
  • 9.4: CVSS severity score, near the maximum rating

Why this matters beyond the tech world

If you’re not a software developer, you might be thinking: “We don’t use LiteLLM. This isn’t our problem.” That’s worth examining carefully.

Healthcare practices

AI tools for clinical documentation, scheduling, and billing are proliferating. Many are built on open-source AI libraries. A compromised library can expose patient records and HIPAA-covered data without any direct breach of your systems.

Law firms

AI-assisted document review, contract analysis, and research tools are increasingly common. If those tools use a poisoned dependency, privileged client communications become collateral damage.

Financial advisors

AI tools for portfolio analysis, report generation, and client communications often sit on the same infrastructure as sensitive financial data. Supply chain malware doesn’t distinguish between the tool and the data it can reach.

Any SMB with a tech vendor

If any software vendor, MSP, or IT contractor in your ecosystem installed the poisoned package during the five-hour window, their compromised credentials could include access to your systems.

The bigger picture: AI tools are the new attack surface

This attack is notable for what it targeted: the AI development stack. LiteLLM sits directly between applications and multiple AI providers, which means it routinely handles API keys for OpenAI, Anthropic, Google, and others, along with the environment variables and configuration that hold them. Compromising a package in this position lets attackers intercept and exfiltrate those secrets without breaching any upstream system, and hands them access to whatever the stolen credentials can reach, potentially including data sent to and from the models themselves.
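The point about environment variables is worth making concrete: a secret in a process’s environment is visible to every piece of code running in that process, including a poisoned dependency. A minimal illustration — the key value is fake, and the function name is ours, not anything from the actual malware:

```python
import os

# A fake secret, set the way many apps configure AI client libraries.
os.environ["OPENAI_API_KEY"] = "sk-example-not-a-real-key"

def what_a_poisoned_dependency_sees() -> dict:
    """Any imported package runs with the application's privileges and can
    read the entire environment; no exploit or escalation is required."""
    return {k: v for k, v in os.environ.items()
            if any(tag in k for tag in ("KEY", "TOKEN", "SECRET"))}

if __name__ == "__main__":
    print(sorted(what_a_poisoned_dependency_sees()))
```

This is why “we only installed one bad package” is never the full story: one import grants the attacker everything the application itself can see.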

This is the trend to watch: as AI tools become embedded in business operations, they become high-value targets. The sophistication of this attack — moving through four separate projects over three weeks before hitting LiteLLM — signals that threat actors are investing seriously in mapping and exploiting the AI supply chain.


What to do right now

Even if you’re not a developer, there are meaningful actions you can take to reduce your exposure from this and future supply chain attacks.

  • Ask any software vendor or IT contractor whether they installed or upgraded LiteLLM on March 24, 2026 between 10:39 and 16:00 UTC. If yes, treat their systems as potentially compromised and rotate any shared credentials immediately.
  • Audit which AI tools and services your business uses, especially newer additions. Understand what data flows through them and what credentials they hold.
  • Review your vendor contracts: do your technology vendors have incident notification obligations? Do they carry cyber liability insurance? Do they have documented vulnerability response processes?
  • For any systems that may have been exposed, rotate API keys, cloud credentials, and database passwords as a precaution. The cost of rotation is far lower than the cost of a confirmed breach.
  • Consider this a trigger to revisit third-party risk in your cybersecurity posture. Supply chain attacks are increasing in frequency and sophistication, and vendor security is now part of your security.