Gemma 3n: Open Source Multimodal AI

Łukasz Grochal

Google has released Gemma 3n, an open multimodal large language model built to run efficiently on consumer hardware, with day-one support across the Hugging Face ecosystem. Available in two compact sizes, E2B and E4B, it runs in as little as 2–4 GB of memory while delivering strong performance for its size. Gemma 3n natively handles text, images, audio, and video through an integrated MobileNet vision encoder, a USM-based audio encoder, and the MatFormer transformer backbone. Per-layer embeddings and KV-cache sharing keep the memory footprint low, and the E4B variant scores over 1300 on LMArena, reportedly a first for a model under 10B parameters.
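The MatFormer ("Matryoshka Transformer") design trains nested sub-models inside one set of weights, which is how a smaller model like E2B can live inside E4B. A minimal sketch of the idea, using a feed-forward layer whose hidden width can be sliced (all names and dimensions here are illustrative, not Gemma 3n's actual configuration):

```python
import numpy as np

def ffn(x, W_in, W_out, width):
    """Feed-forward pass using only the first `width` hidden units.

    In a MatFormer-style layer, smaller widths form nested sub-models
    that reuse a slice of the same trained weights as the full model.
    """
    h = np.maximum(x @ W_in[:, :width], 0.0)  # ReLU over a weight slice
    return h @ W_out[:width, :]

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 32
W_in = rng.normal(size=(d_model, d_hidden))
W_out = rng.normal(size=(d_hidden, d_model))
x = rng.normal(size=(1, d_model))

full = ffn(x, W_in, W_out, width=32)   # full-width layer ("E4B-like")
small = ffn(x, W_in, W_out, width=16)  # nested sub-layer ("E2B-like")

print(full.shape, small.shape)  # both (1, 8): same interface, less compute
```

The smaller slice produces outputs of the same shape with roughly half the compute, which is the property that lets one checkpoint serve multiple device budgets.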

Deeply integrated with Hugging Face’s open ecosystem, it supports straightforward fine-tuning, deployment, and community contributions. This release highlights a move toward high-quality, accessible, open source AI for developers and researchers on everyday devices.
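The KV-cache sharing mentioned above trims inference memory by letting several transformer layers reuse one set of key/value tensors. A back-of-the-envelope sketch of the effect (the layer counts and dimensions are hypothetical, not Gemma 3n's real configuration):

```python
def kv_cache_bytes(layers, heads, head_dim, seq_len, bytes_per_elem=2,
                   shared_groups=None):
    """Estimate KV-cache size for autoregressive decoding.

    Without sharing, every layer stores its own K and V tensors.
    With sharing, only `shared_groups` distinct K/V caches exist,
    and the remaining layers reuse them.
    """
    distinct = shared_groups if shared_groups is not None else layers
    # 2 tensors (K and V) per distinct layer group
    return 2 * distinct * heads * head_dim * seq_len * bytes_per_elem

# Hypothetical configuration for illustration only:
baseline = kv_cache_bytes(layers=30, heads=8, head_dim=256, seq_len=4096)
shared = kv_cache_bytes(layers=30, heads=8, head_dim=256, seq_len=4096,
                        shared_groups=15)
print(shared / baseline)  # 0.5: half the cache when every 2 layers share K/V
```

At long contexts the KV cache, not the weights, often dominates memory, so halving it matters most exactly where small devices are tightest.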

References
01 Hugging Face (huggingface.co)
02 Google DeepMind (deepmind.google)