Generative UI: A rich, custom, visual interactive user experience for any prompt
November 18, 2025
Yaniv Leviathan, Google Fellow; Dani Valevski, Senior Staff Software Engineer; and Yossi Matias, Vice President & Head of Google Research
We introduce a novel implementation of generative UI, enabling AI models to create immersive experiences and interactive tools and simulations, all generated completely on the fly for any prompt. This is now rolling out in the Gemini app and Google Search, starting with AI Mode.
Generative UI is a powerful capability in which an AI model generates not only content but an entire user experience. Today we introduce a novel implementation of generative UI, which dynamically creates immersive visual experiences and interactive interfaces, such as web pages, games, tools, and applications, that are automatically designed and fully customized in response to any question, instruction, or prompt. These prompts can be as simple as a single word or as long as needed for detailed instructions. These new types of interfaces are markedly different from the static, predefined interfaces in which AI models typically render content.
In our new paper, “Generative UI: LLMs are Effective UI Generators”, we describe the core principles that enabled our implementation of generative UI and demonstrate the viability of this new paradigm. Our evaluations indicate that, when generation speed is ignored, the interfaces from our generative UI implementations are strongly preferred by human raters over standard LLM outputs. This work represents a first step toward fully AI-generated user experiences, where users automatically get dynamic interfaces tailored to their needs rather than having to select from an existing catalog of applications.
Our research on generative UI, also referred to as generative interfaces, comes to life today in the Gemini app through an experiment called dynamic view and in AI Mode in Google Search.

Generative UI is useful for a range of applications. For any user question, need, or prompt, as simple as a single word or as complex as elaborate instructions, the model creates a fully custom interface. Left: Getting tailored fashion advice. Middle: Learning about fractals. Right: Teaching mathematics.
For more examples, see the project page.
Bringing generative UI to Google products
Generative UI capabilities will be rolled out as two experiments in the Gemini app: dynamic view and visual layout. When using dynamic view, an experience built on our generative UI implementation, Gemini designs and codes a fully customized interactive response for each prompt, using Gemini’s agentic coding capabilities. It customizes the experience with an understanding that explaining the microbiome to a 5-year-old requires different content and a different set of features than explaining it to an adult, just as creating a gallery of social media posts for a business requires a completely different interface than generating a plan for an upcoming trip.
Dynamic view can be used for a wide range of scenarios, from learning about probability to practical tasks like event planning and getting fashion advice. The interfaces allow users to learn, play, or explore interactively. Dynamic view, along with visual layout, is rolling out today. To help us learn about these experiments, users may initially see only one of them.
Example of generative UI in dynamic view based on the prompt, “Create a Van Gogh gallery with life context for each piece”.
Generative UI experiences are also integrated into Google Search, starting with AI Mode, unlocking dynamic visual experiences with interactive tools and simulations that are generated specifically for a user’s question. Thanks to Gemini 3’s unparalleled multimodal understanding and powerful agentic coding capabilities, Gemini 3 in AI Mode can interpret the intent behind any prompt to instantly build bespoke generative user interfaces. By generating interactive tools and simulations on the fly, it creates a dynamic environment optimized for deep comprehension and task completion. Generative UI capabilities in AI Mode are available for Google AI Pro and Ultra subscribers in the U.S. starting today. Select “Thinking” from the model drop-down menu in AI Mode to try it out.
Example of AI Mode in Google Search with the prompt, “show me how rna polymerase works. what are the stages of transcription and how is it different in prokaryotic and eukaryotic cells”.
How the generative UI implementation works
Our generative UI implementation, described in the paper, uses Google’s Gemini 3 Pro model with three important additions:
- Tool access: A server provides access to several key tools, such as image generation and web search. Tool results can either be returned to the model to increase quality or sent directly to the user’s browser to improve efficiency.
- Carefully crafted system instructions: The system is guided by detailed instructions that cover the goal, planning, examples, and technical specifications, including formatting, tool manuals, and tips for avoiding common errors.
- Post-processing: The model’s outputs are passed through a set of post-processors that address common issues; see the sketch after the system overview below.

A high-level system overview of the generative UI implementation.
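To make the recipe concrete, here is a minimal sketch of how these three pieces could fit together. Everything in it is an assumption for illustration: `call_model`, `resolve_image_tool`, the `data-gen-prompt` tag convention, and the instruction text are hypothetical stand-ins, not the interfaces described in the paper.

```python
import re
from urllib.parse import quote

def call_model(system_instructions: str, prompt: str) -> str:
    """Stand-in for a call to a code-capable LLM such as Gemini 3 Pro."""
    raise NotImplementedError("wire up an LLM client here")

# Hypothetical system instructions covering goal, planning, format, and tool use.
SYSTEM_INSTRUCTIONS = """\
You build a complete, self-contained interactive web page for the user's prompt.
- Plan the interface before writing any code.
- Output a single HTML document with all CSS and JavaScript inlined.
- To request a generated image, emit <img data-gen-prompt="..."> and the
  server will resolve it into a real asset.
"""

def resolve_image_tool(gen_prompt: str) -> str:
    """Stand-in for the image-generation tool; returns a URL for the new asset."""
    return f"https://images.example.com/gen?q={quote(gen_prompt)}"

def post_process(html: str) -> str:
    """Post-processors addressing common issues in raw model output."""
    # Strip a Markdown code fence if the model wrapped its answer in one.
    html = re.sub(r"^```(?:html)?\s*|\s*```$", "", html.strip())
    # Resolve tool-request tags into concrete asset URLs before serving.
    return re.sub(
        r'data-gen-prompt="([^"]+)"',
        lambda m: f'src="{resolve_image_tool(m.group(1))}"',
        html,
    )

def generate_ui(prompt: str) -> str:
    raw = call_model(SYSTEM_INSTRUCTIONS, prompt)
    return post_process(raw)  # HTML ready to render in the user's browser
```

In this sketch, tool results are spliced in during post-processing rather than returned to the model, mirroring the efficiency path described above; a quality-focused variant would instead feed tool results back to the model during generation.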
For some products, it may be preferable for results to appear in a consistent, specific style. Our implementation can be configured so that all results for such a product, including generated assets, are created in a consistent style for all users. Without specific styling instructions, the generative UI selects a style automatically, or the user can influence styling in their prompt, as in the case of dynamic view in the Gemini app.
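Continuing the hypothetical sketch above, per-product styling could be layered in as extra system-instruction text. `HOUSE_STYLE` and the helper names here are illustrative assumptions, not the shipped configuration.

```python
# Hypothetical product-level style configuration (illustrative values only).
HOUSE_STYLE = {
    "palette": "#1A73E8 accents on white, dark-gray text",
    "typeface": "Roboto",
    "layout": "clean card-based layout with generous whitespace",
}

def style_instructions(style: dict) -> str:
    """Turn a style config into extra system-instruction text."""
    rules = "\n".join(f"- {key}: {value}" for key, value in style.items())
    return f"Render every interface and generated asset in this style:\n{rules}\n"

def generate_ui_for_product(prompt: str, style: dict | None = None) -> str:
    # With no style config, the model picks a style or follows the user's prompt.
    instructions = SYSTEM_INSTRUCTIONS
    if style is not None:
        instructions += style_instructions(style)
    return post_process(call_model(instructions, prompt))
```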