From Clawdbot To OpenClaw: Power, Hype And Weak Locks

Łukasz Grochal

OpenClaw, formerly known as Clawdbot and Moltbot, sits right at the intersection of AI automation and crypto, and that mix is proving both powerful and risky. On the upside, it is a highly capable execution engine rather than “just” a chatbot: users can wire it into email, calendars, local files, cloud services and crypto wallets so that it actually performs tasks instead of merely suggesting them. That same deep integration, combined with rapid growth and messy branding, has created a broad attack surface, from malicious extensions and exposed dashboards to outright wallet-draining attacks. Security researchers have already documented 14 malicious “skills” uploaded to ClawHub that disguised themselves as trading or wallet automations while in reality trying to push malware via copy‑paste shell commands and unsandboxed code execution.

Separate work on the Model Context Protocol side describes “ClawdBot” style attacks where an AI assistant given wallet permissions can be socially engineered into signing unauthorized transactions, turning AI into an unwitting middleman in crypto theft without any traditional system compromise. On the social layer, Moltbook – the agent‑only network built around the same stack – has rapidly attracted thousands of agents and over a million participants, spawning everything from collaborative coding spaces to oddities like the “Crustafarianismmicro‑religion, where bots spin quasi‑spiritual narratives about memory, shells and transformation.

At the same time, researchers have shown that OpenClaw’s default security posture is weak: exposed instances leak API keys and chat histories, basic prompt‑injection tests score it as almost wide open, and one audit of its architecture found that system prompts, tool configs and memory files could be exfiltrated with almost no effort. Despite all this, current reporting points more to scattered incidents, misconfigurations and targeted wallet thefts than to a massive, coordinated wave of ransomware‑style extortion or industrial‑scale crypto draining driven by autonomous agents; the more realistic near‑term risk looks like “shadow IT” agents with too much access and too little governance, not a fully rogue AI crimewave.

In practice, that means OpenClaw is impressive as a flexible, user‑controlled execution layer for crypto and Web3 workflows, while Moltbook has basically failed as a secure-by‑design environment and instead become a vivid case study in how quickly agent ecosystems can turn into a security headache when discovery, social features and financial access are bolted together without strong guardrails.

References
4 sources
01
openclaw.aiOpenClaw
02
openclaw.aiMoltbot
03
github.comGitHub
04
moltbook.comMoltbook
TurboQuant KV Cache Compression Visualization

Google’s TurboQuant makes AI caches smaller and faster

Nvidia Slashes LLM Context Memory With KVTC Design

KVTC: Nvidia’s 20x LLM Memory Cut Without Retraining

OpenSandbox Logo

OpenSandbox: A Unified Sandbox Layer For AI Agents

suno style local music studio ui screenshot

Local ACE-Step Studio: Suno-Style Music on Your PC

Artist designing AI image pipeline with ComfyUI nodes

Inside ComfyUI: Power Tools For Visual Creators

Personal AI operating system concept with OpenClaw

OpenClaw And The New Era Of Personal AI Agents

DeepSeek V4‑Pro 1.6T‑Parameter AI Model Architecture

DeepSeek V4: 1M‑Token Context and Budget Frontier AI Models

Palantir Manifesto Graphic: AI Defense and Culture Clash

Palantir Manifesto Hits at Regressive Cultures and AI Shift

OpenAI ChatGPT Images 2.0 feature overview

OpenAI Updates ChatGPT Images With Better Text

Publishers Are Shutting Out Internet Archive

News Giants Block Wayback Machine Over AI Fears