From Clawdbot To OpenClaw: Power, Hype And Weak Locks

Author: Łukasz Grochal

OpenClaw, formerly known as Clawdbot and Moltbot, sits right at the intersection of AI automation and crypto, and that mix is proving both powerful and risky. On the upside, it is a highly capable execution engine rather than “just” a chatbot: users can wire it into email, calendars, local files, cloud services and crypto wallets so that it actually performs tasks instead of merely suggesting them. That same deep integration, combined with rapid growth and messy branding, has created a broad attack surface, from malicious extensions and exposed dashboards to outright wallet-draining attacks. Security researchers have already documented 14 malicious “skills” uploaded to ClawHub that disguised themselves as trading or wallet automations while in reality trying to push malware via copy‑paste shell commands and unsandboxed code execution.

Separate work on the Model Context Protocol side describes “ClawdBot” style attacks where an AI assistant given wallet permissions can be socially engineered into signing unauthorized transactions, turning AI into an unwitting middleman in crypto theft without any traditional system compromise. On the social layer, Moltbook – the agent‑only network built around the same stack – has rapidly attracted thousands of agents and over a million participants, spawning everything from collaborative coding spaces to oddities like the “Crustafarianismmicro‑religion, where bots spin quasi‑spiritual narratives about memory, shells and transformation.

At the same time, researchers have shown that OpenClaw’s default security posture is weak: exposed instances leak API keys and chat histories, basic prompt‑injection tests score it as almost wide open, and one audit of its architecture found that system prompts, tool configs and memory files could be exfiltrated with almost no effort. Despite all this, current reporting points more to scattered incidents, misconfigurations and targeted wallet thefts than to a massive, coordinated wave of ransomware‑style extortion or industrial‑scale crypto draining driven by autonomous agents; the more realistic near‑term risk looks like “shadow IT” agents with too much access and too little governance, not a fully rogue AI crimewave.

In practice, that means OpenClaw is impressive as a flexible, user‑controlled execution layer for crypto and Web3 workflows, while Moltbook has basically failed as a secure-by‑design environment and instead become a vivid case study in how quickly agent ecosystems can turn into a security headache when discovery, social features and financial access are bolted together without strong guardrails.