OpnCrafter
Module 1 of 10: Generative UI

The Shift from Chatbots to Generative UI

For the last two years, "AI" has meant "chatbot". We type text, we get text. But text is a terrible interface for complex tasks. In 2026, the paradigm has shifted. Welcome to Generative UI.

Why Generative UI Matters

Think about the last time you used ChatGPT. You probably asked it something like "Create a table comparing different React frameworks," and it gave you a markdown table. Then you had to mentally parse that table, maybe copy it into a spreadsheet, and manually create charts or visualizations to actually understand the data. This is the limitation of text-based AI.

Generative UI solves this fundamental problem by allowing AI to respond with interactive, visual components instead of just text. Instead of receiving a markdown table, you get a fully interactive data table with sorting, filtering, and built-in visualizations. Instead of reading about stock prices, you see a live, updating chart you can manipulate with clicks and gestures.

This isn't just about making things prettier. It's about bandwidth and efficiency. Humans take in visual structure far faster than they can read prose. When an AI can show you a calendar picker instead of a text-based date-selection flow, the entire interaction becomes dramatically more efficient. This is why companies like Perplexity, OpenAI, and Anthropic are all racing to implement generative UI capabilities in their products.

The shift to Generative UI represents the maturation of AI from a novelty to a genuine productivity tool. It's the difference between reading a weather forecast and seeing an interactive radar map. Both convey information, but only one lets you truly explore and understand.

1. The Four Eras of Interface

To understand why GenUI is inevitable, we must look at history.

📺 1. CLI (1970s)

Command Line. Efficient but high barrier to entry. "rm -rf /"

🖱️ 2. GUI (1990s)

Graphical UI. Point and click. Intuitive but rigid.

💬 3. LUI (2023)

Language UI (Chatbots). Flexible but low-bandwidth.

✨ 4. GenUI (2025+)

Generative UI. The best of GUI and LUI. Fluid interfaces that adapt to intent.

2. The Limitation of Chat

Imagine trying to book a flight via SMS. "What days?" "Tuesday." "Which Tuesday?" "Next one." "Here are 5 flights..." It's exhausting. A calendar date-picker is vastly more efficient.

Yet most AI agents today force us into this linear text conversation. Generative UI changes this by letting the model render React components instead of just markdown.

Standard Chatbot Flow

User: "Show me Apple's stock price."
Bot: "Apple (AAPL) is currently trading at $185.20, up 1.2% today."

Generative UI Flow

User: "Show me Apple's stock price."
Bot: [Renders Interactive Financial Chart Component]
User (clicks "1 Year"): Chart updates instantly.

3. How It Works: The RSC Architecture

This isn't magic. It relies on a contract between the Client and the Server. We use the Vercel AI SDK 3.0+ and React Server Components (RSC).

The Usage Loop

1. User Prompt: "Show TSLA chart"
⬇️
2. LLM (Server): calls displayChart("TSLA")
⬇️
3. Server Action: fetches data, renders <Chart data={...} />
⬇️
4. Client Stream: hydrates the component
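To build intuition for the loop, here is a toy TypeScript simulation. This is not the SDK's internals: pickTool stands in for the LLM's tool choice, fetchStock is a fake data source, and the returned object stands in for the serialized component the client would hydrate. Every name here is illustrative.

```typescript
type StockData = { symbol: string; price: number };
type ComponentNode = { type: string; props: { data: StockData } };

// Step 2 stand-in: "the LLM" decides whether to call a tool.
function pickTool(prompt: string): { tool: string; symbol: string } | null {
  const m = prompt.match(/\b[A-Z]{2,5}\b/); // crude ticker detection
  return m ? { tool: "displayChart", symbol: m[0] } : null;
}

// Step 3 stand-in: the server action fetches data (hard-coded here).
function fetchStock(symbol: string): StockData {
  const fakePrices: Record<string, number> = { TSLA: 242.5, AAPL: 185.2 };
  return { symbol, price: fakePrices[symbol] ?? 0 };
}

// The whole loop: prompt in, either a component description or text out.
function runLoop(prompt: string): ComponentNode | string {
  const call = pickTool(prompt);
  if (call === null) return "Sorry, plain text fallback.";
  const data = fetchStock(call.symbol);
  return { type: "StockChart", props: { data } }; // step 4: streamed to client
}
```

The key property the sketch preserves: the server never ships markup strings, it ships a serializable description of a component plus its props.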

4. Code Preview: The `streamUI` Function

The core magic happens in a function called streamUI (or similar abstractions in SDK 4.0).

import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await streamUI({
  model: openai('gpt-4o'),
  messages: [...history, { role: 'user', content: input }],
  tools: {
    showStockPrice: {
      description: 'Show stock price chart',
      parameters: z.object({ symbol: z.string() }),
      generate: async ({ symbol }) => {
        const data = await fetchStock(symbol);
        return <StockChart data={data} />; // <--- RETURN REACT JSX!
      },
    },
  },
});

Notice the return type. We are not returning JSON. We are returning JSX. This is executed on the server, and the result is streamed to the client.

The RSC Payload: What Actually Crosses the Network?

A common misconception is that the server sends raw HTML strings via WebSockets. In reality, React Server Components (RSC) send a highly optimized tree format over HTTP streaming. When streamUI executes, the network payload looks something like this:

0:["$@1",["title","StockChart"]]
1:{"symbol":"TSLA","price":185.20,"trend":"up"}
2:["$","div",null,{"className":"chart-wrapper","children":["$","svg",null,...]}]

This compact, line-delimited payload tells the React client exactly how to reconstruct and hydrate the component tree. It is far smaller than sending a fully baked HTML document, and much safer than injecting raw `innerHTML`, which is susceptible to XSS.
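To make the row structure concrete, here is a toy parser for that format. The real wire format (React's internal "Flight" protocol) is an implementation detail that changes between React versions, so treat this purely as an illustration of the `id:json` row shape, not as a real RSC decoder:

```typescript
// A valid-JSON stand-in for the payload rows shown above.
const sample = [
  '1:{"symbol":"TSLA","price":185.20,"trend":"up"}',
  '2:["$","div",null,{"className":"chart-wrapper"}]',
].join("\n");

// Each line is `<row id>:<JSON>`; split on the first colon and parse.
function parseFlightRows(payload: string): Record<string, unknown> {
  const rows: Record<string, unknown> = {};
  for (const line of payload.split("\n")) {
    const i = line.indexOf(":");
    rows[line.slice(0, i)] = JSON.parse(line.slice(i + 1));
  }
  return rows;
}
```

Rows can reference each other by id (the `"$@1"` marker in the sample above), which is what lets React stream and stitch the tree incrementally.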

5. streamUI vs streamObject

As of the Vercel AI SDK 3.1+, developers have two primary ways to turn model output into something richer than plain text: streamUI and streamObject. Knowing when to use which is one of the most important architectural decisions in GenUI.

| Feature | streamUI (server-side) | streamObject (client-side) |
| --- | --- | --- |
| Return type | React node (JSX) | JSON object (matching a Zod schema) |
| Bundle size impact | Zero (libraries stay on the server) | Heavy (chart libraries downloaded to the client) |
| Interactivity | Requires a Client Component wrapper | Native React state binding |
| Best for | Complex, heavy dashboards, PDF viewers | Forms, multi-step wizards, highly interactive UI |

Rule of Thumb: If the component requires heavy NPM libraries (like Recharts, D3, or PDF.js) and the user just needs to look at it, use streamUI. If the component is a form the user needs to type into constantly, use streamObject so the state lives locally on the client.
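In a real streamObject setup you would describe the expected shape with Zod (`z.object({...})`). To show what that schema buys you without any dependency, here is a hand-rolled stand-in: untrusted JSON in, typed value or null out. The StockQuery shape is invented for this example:

```typescript
// Allowed chart ranges, mirroring what z.enum([...]) would enforce.
const RANGES = ["1D", "1W", "1M", "1Y"] as const;
type Range = (typeof RANGES)[number];
type StockQuery = { symbol: string; range: Range };

// Validate an unknown value (e.g. streamed model output) against the shape.
function parseStockQuery(raw: unknown): StockQuery | null {
  if (typeof raw !== "object" || raw === null) return null;
  const o = raw as Record<string, unknown>;
  if (typeof o.symbol !== "string" || o.symbol.length === 0) return null;
  if (typeof o.range !== "string" || !RANGES.includes(o.range as Range)) return null;
  return { symbol: o.symbol, range: o.range as Range };
}
```

The point is the same either way: model output is untrusted, and nothing should reach your rendering layer until it has passed schema validation.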

6. When NOT to use GenUI

GenUI is exciting, but it adds complexity.

  • Latency: Rendering a component takes longer than spitting out a token string.
  • Mobile Support: Ensure your generated components are responsive. A massive table on mobile is worse than text.
  • Security: If the model hallucinates a prop (e.g., onClick="eval(...)"), you have a problem. RSC mitigates this by serializing props safely.

Real-World Use Cases: Where GenUI Shines

Generative UI isn't just a theoretical concept—it's already being deployed in production systems across multiple industries. Here are five concrete examples where GenUI dramatically improves user experience:

1. Financial Trading Dashboards

Instead of asking "What's the 52-week high for Tesla?", users can say "Show me TSLA's performance this year." The AI generates an interactive Candlestick chart with volume indicators, moving averages, and real-time updates. Users can then click time ranges (1D, 1W, 1M, 1Y) or add technical indicators through the generated UI—all without leaving the conversation. Companies like Bloomberg and Robinhood are experimenting with similar interfaces.

2. Travel Booking Systems

Traditional chatbots make booking flights painful: "Where from?" "San Francisco." "Where to?" "Tokyo." "Dates?" "Next month." With GenUI, the AI can immediately render a calendar date picker, a map showing routes, and a price comparison table. Users select dates visually, filter by price/duration with sliders, and complete booking in seconds rather than minutes.

3. Healthcare Patient Portals

Medical data is highly visual—lab results, vital signs, medication schedules. Instead of describing "Your blood pressure was 120/80 yesterday," a GenUI system renders a line chart showing trends over the past 30 days, with threshold indicators for normal ranges. The AI can generate medication reminders as interactive pill cards with dosage timers, making health management more intuitive.

4. E-commerce Product Discovery

When a user asks "Show me laptops under $1000 with 16GB RAM," the AI doesn't list specs in text. It generates a filterable product grid with images, star ratings, and comparison charts. Users can click to add items to cart, view similar products, or adjust filters—all while the AI continues to answer follow-up questions.

5. Enterprise Data Analytics

Business analysts often ask questions like "Show last quarter's revenue by region." Instead of returning a CSV or text summary, GenUI systems generate interactive dashboards with bar charts, pie charts, and drill-down capabilities. Analysts can click regions to see city-level data, hover for tooltips, and export visualizations—all generated on-demand by the AI.

Common Pitfalls & Troubleshooting

While Generative UI is powerful, it introduces new challenges. Here are the top issues developers encounter and how to solve them:

1. Overcomplicating the UI

Problem: New developers often try to generate entire multi-page dashboards in a single response. This leads to slow rendering, poor mobile experience, and overwhelming users.

Solution: Start simple. Generate one focused component at a time. If showing stock data, return a single chart, not a chart + table + news feed. Use progressive disclosure—let users click "Show more details" to expand functionality.

2. Ignoring Loading States

Problem: Users see a blank screen while the AI fetches data and generates components. This creates perceived latency.

Solution: Always yield a skeleton component first using generator functions. Example:

generate: async function* ({ symbol }) {
  yield <StockSkeleton symbol={symbol} />; // Instant feedback
  const data = await fetchStock(symbol);
  return <StockChart data={data} />; // Final component
}
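The same yield-then-finish rhythm can be demonstrated without React at all. This sketch replaces the JSX with plain strings and the network call with a timer (fakeFetchPrice and the frame labels are made up for illustration):

```typescript
// Simulated slow data fetch (stands in for fetchStock above).
function fakeFetchPrice(symbol: string): Promise<number> {
  return new Promise((resolve) => setTimeout(() => resolve(242.5), 10));
}

// Async generator: emit a placeholder immediately, the real view later.
async function* generateStockView(symbol: string): AsyncGenerator<string> {
  yield `StockSkeleton(${symbol})`; // instant feedback
  const price = await fakeFetchPrice(symbol);
  yield `StockChart(${symbol}: ${price})`; // final view
}

// Consume the stream the way a client would: frame by frame.
async function demo(): Promise<string[]> {
  const frames: string[] = [];
  for await (const frame of generateStockView("TSLA")) frames.push(frame);
  return frames;
}
```

The user always sees the skeleton frame first; the final frame replaces it only once the data is in. That is exactly the perceived-latency fix the generator pattern gives you in streamUI.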

3. Accessibility Failures

Problem: Generated components often lack proper ARIA labels, keyboard navigation, and screen reader support.

Solution: Create accessible component templates. Ensure all interactive elements have `aria-label` attributes, support Tab navigation, and include text alternatives for visual data. Test with screen readers during development.

4. State Management Confusion

Problem: Developers struggle with where state lives—client or server? This causes bugs like lost user inputs or duplicate requests.

Solution: Follow the rule: Server state for data, client state for UI. Fetch stock prices on the server, but manage chart zoom level on the client. Use React Server Components for data, Client Components for interactivity.

Frequently Asked Questions

Can I use GenUI with LLMs other than OpenAI?

Yes! The Vercel AI SDK supports Anthropic (Claude), Google (Gemini), and any OpenAI-compatible API. The key requirement is function calling support. Models like Claude 3.5 Sonnet and Gemini 1.5 Pro work excellently.

Does GenUI work on mobile devices?

Absolutely. Since you're generating standard React components, they can use responsive CSS (Tailwind, CSS modules) and touch gestures. However, you should design mobile-first components—avoid wide tables or complex multi-column layouts that don't adapt to small screens.

How do I handle errors in generated components?

Wrap your `generate` function in try/catch blocks and return error components on failure:

try {
  const data = await fetchStock(symbol);
  return <StockChart data={data} />;
} catch (error) {
  return <ErrorCard message="Failed to load stock data" />;
}

What about SEO? Are generated components indexable?

React Server Components render on the server, and with a framework like Next.js the initial page load arrives as server-rendered HTML, so that content is indexable by search engines. However, UI generated mid-conversation in response to a user's prompt is never part of a crawlable page. For content that must rank, pre-render it statically.

Can users interact with generated components to trigger more AI responses?

Yes! This is the "Human-in-the-Loop" pattern. Generated components can include buttons that call Server Actions, which in turn trigger new AI responses. For example, a stock chart can have a "Buy" button that asks the AI to create a purchase order form.
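The Human-in-the-Loop round trip can be modeled in a few lines of plain TypeScript. This is a toy: aiRespond stands in for the LLM, and clickBuy stands in for the server action a generated Buy button would call; none of these are SDK APIs.

```typescript
type Turn = { role: "user" | "assistant"; content: string };

// Stand-in for the model: answers with the name of a component to render.
function aiRespond(history: Turn[]): Turn {
  const last = history[history.length - 1].content;
  if (last.startsWith("action:buy")) {
    return { role: "assistant", content: "PurchaseOrderForm" };
  }
  return { role: "assistant", content: "StockChart(with Buy button)" };
}

// Stand-in for the Buy button's server action: append a structured
// user turn describing the click, then ask the model again.
function clickBuy(history: Turn[], symbol: string): Turn[] {
  const withAction: Turn[] = [
    ...history,
    { role: "user", content: `action:buy ${symbol}` },
  ];
  return [...withAction, aiRespond(withAction)];
}
```

The important idea is that a UI interaction re-enters the conversation as a structured message, so the model can respond to clicks exactly as it responds to typed prompts.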

How much does GenUI cost compared to regular chatbots?

GenUI uses slightly more tokens because components are described in the system prompt and tool definitions. Expect 10-20% higher LLM costs. However, the improved user experience often leads to higher conversion rates, offsetting the extra cost. For high-traffic apps, caching component templates can reduce costs significantly.

What We Will Build

In this 10-part course, we will build a Stock Analysis Dashboard that:

  • Streams UI components from the server.
  • Generates dynamic charts using Recharts.
  • Creates custom forms on the fly using Zod.
  • Uses Voice (Whisper) to control the interface.

Ready to ditch the Markdown block? Let's get started.

👨‍💻
Written by

Vivek

AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.

GPT-4o · LangChain · Next.js · Vector DBs · RAG · Vercel AI SDK
