Google Stitch: From simple prompt to working app UI

Łukasz Grochal
Source: Stitch (stitch.withgoogle.com)

Stitch (stitch.withgoogle.com) is an experimental Google Labs tool that turns text prompts, sketches, or screenshots into ready‑to‑edit UI designs and front‑end code for web and mobile apps. It is aimed at designers, developers, product folks, and even non‑technical teams who want to move from idea to interface much faster, without starting from a blank canvas in Figma or a code editor.

You describe what you need in natural language, optionally add an image or wireframe, and Stitch generates multi‑screen layouts, responsive variants, and HTML/CSS you can inspect or export. The tool runs on Google’s Gemini 2.5 models and offers two main modes: a faster Standard mode for quick drafts, and a slower Experimental mode that accepts visual input and pushes for higher fidelity. From there you can iterate conversationally, branch variations, tweak themes, and then send everything directly into Figma with Auto Layout preserved, or hand it off to developers through generated code or integration with Google AI Studio.
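To make the hand‑off concrete, below is a minimal sketch of the kind of plain HTML/CSS a Stitch export could contain for a single mobile screen. The structure, class names, and styles here are assumptions made for illustration, not actual Stitch output:

```html
<!-- Illustrative sketch only: the markup and class names are assumptions,
     not real Stitch output. Stitch exports plain HTML/CSS in this spirit. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Habit Tracker</title>
  <style>
    body { margin: 0; font-family: system-ui, sans-serif; background: #f6f7f9; }
    .screen { max-width: 390px; margin: 0 auto; padding: 16px; }
    .habit-card {
      display: flex; justify-content: space-between; align-items: center;
      background: #fff; border-radius: 12px; padding: 12px 16px; margin-bottom: 8px;
    }
    .habit-card button {
      border: none; border-radius: 8px; padding: 8px 12px;
      background: #2e7d32; color: #fff;
    }
  </style>
</head>
<body>
  <main class="screen">
    <h1>Today</h1>
    <div class="habit-card">
      <span>Drink water</span>
      <button>Done</button>
    </div>
    <div class="habit-card">
      <span>Read 20 minutes</span>
      <button>Done</button>
    </div>
  </main>
</body>
</html>
```

Because the export is ordinary HTML/CSS rather than a proprietary format, developers can drop it into any front‑end stack and swap the generated styles for their own design system.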

Instead of replacing full design suites, Stitch sits earlier in the workflow as an ideation and layout engine that helps teams explore concepts, validate structures, and get “good enough” first passes before doing detailed visual design and interaction work elsewhere. It is currently available for free via Google Labs, with usage limits depending on the chosen mode, which makes it easy for individuals and small teams to experiment without budget friction. In practice it works best when you treat it like a visual collaborator: give it a clear description (for example, a mobile habit tracker, an admin dashboard for beekeeping, or an onboarding flow for a library app), review what it proposes, and then refine the strongest direction with follow‑up prompts or manual edits after exporting.
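In practice, a session might look like the following exchange. The wording is purely illustrative; Stitch has no fixed prompt syntax, and these refinements are assumptions about how a typical iteration could go:

    Prompt 1: "A mobile habit tracker with a daily checklist, a streak counter, and a calm green color scheme."
    Prompt 2: "Make the checklist cards larger and add an empty state for days with no habits yet."
    Prompt 3: "Show a dark‑mode variant of the same screens."

Each follow‑up builds on the previous result, so you can converge on a direction conversationally instead of restarting from scratch.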

There are still trade‑offs. Stitch is not as deep as Figma or Adobe XD when it comes to component systems, design tokens, or prototyping, and complex product design still demands human judgment. Even so, it can dramatically reduce the time needed to get from a rough idea to tangible screens that stakeholders can react to.
