Kimi Claw: Browser Native AI Agents With 5,000 Skills

Łukasz Grochal

Moonshot AI has rolled out Kimi Claw, a browser-native AI agent platform built on the OpenClaw orchestration standard and tightly integrated with the Kimi assistant. It runs directly in the browser as a persistent, "always-on" agent that can search the web, work with documents, and automate multi-step workflows rather than just answering one-off prompts. Central to the offering is ClawHub, a marketplace of more than 5,000 community-contributed skills covering data analysis, research pipelines, SEO, coding, image handling, and various business automations, so users can plug in ready-made workflows instead of building everything from scratch.

The service also bundles 40 GB of cloud storage for files, reports, and outputs, which lets agents store context, retrieve past work, and keep version history across devices. Kimi Claw leans on Moonshot's recent K2 and K2.5 models for multimodal reasoning and long context, so it can combine web search, tools, and uploaded materials in a single workflow. Overall it is positioned less as a simple chatbot and more as an AI agent "infrastructure" layer for power users, teams, and small businesses that want automation with a relatively low barrier to entry and a growing ecosystem of skills. At the same time, it leaves room for more advanced users to chain tools, customize agents, and extend the platform as the marketplace matures.
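To make the "chain tools" idea concrete, here is a minimal sketch of what skill chaining could look like in plain Python. Everything here (the `Skill` and `Workflow` names, the context-dict convention, the toy skills) is an illustrative assumption, not Kimi Claw's or ClawHub's actual API, which has not been shown in this article.

```python
# Hypothetical sketch only: none of these names come from Kimi Claw's real API.
# The idea: marketplace "skills" are composable steps that each read and
# extend a shared context, so a workflow is just an ordered chain of them.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Skill:
    name: str
    run: Callable[[Dict], Dict]  # takes the context dict, returns an updated copy


class Workflow:
    """Runs skills in order, threading one context dict through every step."""

    def __init__(self, skills: List[Skill]) -> None:
        self.skills = skills

    def execute(self, context: Dict) -> Dict:
        for skill in self.skills:
            context = skill.run(context)
        return context


# Two toy skills standing in for marketplace entries (e.g. research, summarizing).
fetch = Skill("web_search", lambda ctx: {**ctx, "hits": [f"result for {ctx['query']}"]})
summarize = Skill("summarize", lambda ctx: {**ctx, "summary": "; ".join(ctx["hits"])})

report = Workflow([fetch, summarize]).execute({"query": "KV cache compression"})
print(report["summary"])  # prints: result for KV cache compression
```

The single shared context is the key design point: because each skill only agrees on a common in/out shape, community-contributed skills can be recombined freely without knowing about each other.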
