Kani TTS 2: Fast Open Voice Cloning on 3 GB VRAM

Łukasz Grochal

Kani TTS 2 is a compact open‑source text‑to‑speech model that aims to bring realistic voice synthesis and cloning to ordinary consumer hardware rather than datacenter‑class machines. With roughly 400 million parameters and 22 kHz output, it runs in about 3 GB of VRAM while still offering zero‑shot voice cloning from a short reference clip, along with English support and early multilingual variants. The system is built around a token‑based audio representation and a two‑stage pipeline that separates semantic/acoustic token generation from final waveform decoding, which keeps latency low enough for near‑real‑time conversation on modern GPUs. In practice, users report a real‑time factor of around 0.2 on high‑end cards, so generating 10 seconds of speech takes roughly 2 seconds, and typical VRAM usage stays slightly under the advertised 3 GB limit.
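To make the latency claim concrete, the real‑time factor (RTF) is simply generation time divided by audio duration, so an RTF below 1 means faster‑than‑real‑time synthesis. A minimal sketch of that arithmetic, using the ~0.2 figure reported by users (the function name is illustrative, not part of any Kani TTS API):

```python
def generation_time(audio_seconds: float, rtf: float = 0.2) -> float:
    """Wall-clock time to synthesize a clip of a given length.

    RTF = generation time / audio duration, so RTF < 1 means the
    model produces audio faster than it plays back.
    """
    return audio_seconds * rtf

# At the reported RTF of ~0.2, a 10-second clip takes about 2 seconds.
print(generation_time(10.0))  # -> 2.0
```

On slower GPUs the RTF will be higher, but as long as it stays under 1 the model can keep up with a live conversation.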

The model also focuses on capturing accents and local speech styles, offering region‑flavored English voices and the option to mimic a custom speaker without per‑speaker fine‑tuning, which makes it attractive for tools, personal assistants, and content workflows that need distinctive but reproducible voices.

At the same time, its relatively small size means it will not match very large proprietary systems in extreme expressiveness or multilingual coverage, and its strong voice cloning raises the usual ethical concerns around consent, impersonation, and deepfake‑style misuse. It is best seen as a practical, well‑engineered option in the open ecosystem rather than a solution to every TTS problem.
