When you connect a tool like OpenClaw to your internal systems, you hand it your passwords and API keys. Most tools store those in plain text -- like writing your bank password on a sticky note and leaving it on your desk. Anyone who gets access to the machine can read them.
That's exactly what happened this week.
What happened with LiteLLM
LiteLLM is a popular open-source tool that connects AI agents to models like Claude and ChatGPT, with 95 million downloads per month. If you're using OpenClaw or similar tools, there's a good chance LiteLLM is running in the background, routing your requests.
Attackers first compromised a security scanner called Trivy that LiteLLM used in its build process. That gave them the keys to push a malicious software update directly to PyPI, where developers download Python packages. Anyone who installed the update had every password and API key on their machine silently copied and sent to the attackers. No warning. Just installing the update was enough.
The malicious versions were live for about three hours before they were caught and removed.
Why this matters for anyone using AI automation
If your passwords are stored in plain text where the AI can see them, any breach in the chain exposes everything. It doesn't matter how good the AI model is or how useful the automation is. The weakest link is wherever the credentials live.
Most AI automation tools work like this: you connect your systems, the tool stores your API keys, and when a workflow runs it passes those keys to the AI model so it can make requests on your behalf. Your credentials end up sitting in the model's memory, flowing through infrastructure you don't control.
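To make that flow concrete, here is a minimal sketch of the risky pattern described above: a stored key is read in plain text and pasted straight into the context sent to the AI model. All names and values here are illustrative, not taken from any specific tool.

```python
# In many tools this sits unencrypted in a config file or environment variable.
stored_credentials = {"crm_api_key": "sk-plaintext-example"}

def build_model_request(task, creds):
    # The key travels inside the prompt itself, so it now lives in the
    # model's memory and in every log and service the request passes through.
    return {"prompt": f"Using CRM key {creds['crm_api_key']}, {task}."}

request = build_model_request("fetch this week's new leads", stored_credentials)
assert "sk-plaintext-example" in request["prompt"]  # the secret leaked into the context
```

Once the key is in the prompt, any compromised link in the chain -- the tool, the router, the model provider's logs -- can read it.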
LiteLLM showed what happens when one link in that chain gets compromised.
An architecture where your keys never touch the AI
The approach I took with General Input comes down to three things:
1. Limit access with scopes. You control exactly what the AI can and can't do in each system. Read-only access to your CRM? Fine. No access to billing? Done. The AI only gets the permissions you explicitly grant.
2. Credentials encrypted at rest, never shown to the AI. The AI decides what to do. A separate execution engine handles the actual login to your tools at the moment it's needed, then strips the credentials before the AI ever sees the response. Think of it like giving someone driving directions without handing them your car keys.
3. Full audit log every time your system is accessed, with a reason included. Every API call, every field, every record -- logged with an explanation of why it was accessed. Not a summary, not "the workflow ran successfully." An actual log of what was touched, when, by which step, and for what purpose.
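A minimal sketch of how those three pieces might fit together, in Python. Everything here -- the scope table, the vault, the fake CRM call, the log fields -- is illustrative, not General Input's actual implementation.

```python
import datetime

# 1. Scopes: the AI only gets what is explicitly granted; deny by default.
SCOPES = {
    "crm":     {"read": True, "write": False},  # read-only CRM access
    "billing": {},                              # no billing access at all
}

def is_allowed(system, action):
    return SCOPES.get(system, {}).get(action, False)

# 2. The execution engine holds the credentials; the AI's plan never contains them.
VAULT = {"crm": "secret-crm-key"}  # encrypted at rest in a real system

AUDIT_LOG = []

def fake_crm_call(plan, key):
    # Stand-in for the real API request; echoes the key to show the redaction step.
    return f"200 OK (auth {key}): 3 contacts"

def execute(plan):
    if not is_allowed(plan["system"], plan["action"]):
        raise PermissionError(f"{plan['action']} on {plan['system']} not granted")
    key = VAULT[plan["system"]]     # decrypted only at the moment of use
    raw = fake_crm_call(plan, key)  # a real HTTP request in practice
    # 3. Audit: record what was touched and why, before anything is returned.
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": plan["system"],
        "action": plan["action"],
        "reason": plan["reason"],
    })
    # Strip the credential so the AI never sees it in the response.
    return raw.replace(key, "[REDACTED]")

# The AI produces only a plan: which system, which action, and a reason for the log.
plan = {"system": "crm", "action": "read",
        "reason": "Needed this week's leads to draft follow-up emails"}
response = execute(plan)
assert "secret-crm-key" not in response and "[REDACTED]" in response
```

The point of the split is that the AI's plan is just data -- it names the system and the action, but a compromised model or prompt can never exfiltrate a key it was never shown.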
For teams with strict compliance requirements, there's a self-hosted option: your cloud, your AI model, your credentials, your audit logs. Nothing crosses a boundary you don't own.
The fear is justified. The architecture should account for it.
Leaders are right to be cautious about connecting their business systems to AI. The question isn't whether to be nervous. It's whether the system you're using was built with that concern in mind from the start.