OpnCrafter
Module 4 of 10: Generative UI

Rendering Tool Outputs as UI


Jan 2, 2026 • 20 min read

This is where the magic happens. When the LLM calls a tool (e.g., `get_stock_price`), instead of returning raw JSON text, we return an interactive component.

1. The Mental Model: "Tool as a Component Factory"

In traditional LLM apps, tools return Data Strings.
In Generative UI, tools return React Components.

2. The `streamUI()` Function Mechanics

The Vercel AI SDK exposes this pattern through `streamUI()` (in `ai/rsc`; it supersedes the SDK's older `render()` helper), which cleanly maps tool names to generator functions.

import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import StockChart from '@/components/StockChart';
import StockSkeleton from '@/components/StockSkeleton';
import ErrorMessage from '@/components/ErrorMessage';

const ui = await streamUI({
  model: openai('gpt-4o'),
  messages: history,
  tools: {
    get_stock_price: {
      description: 'Get stock price and history',
      parameters: z.object({ symbol: z.string() }),

      // THE MAGIC: Return a Component!
      generate: async function* ({ symbol }) {
        // Step 1: Instant feedback (yield the skeleton)
        yield <StockSkeleton symbol={symbol} />;

        try {
          // Step 2: The API call (server-side)
          const data = await fetchStockData(symbol);

          // Step 3: Return the final interactive chart
          return <StockChart data={data} symbol={symbol} />;
        } catch (e) {
          // Step 4: Graceful error handling
          return <ErrorMessage error="Failed to fetch stock data" />;
        }
      }
    }
  }
});

return ui.value;

3. Deep Dive: Generator Functions (`function*`)

Notice the `function*`. This is a JavaScript generator: a function that can produce multiple values over time, yielding intermediate updates before its final `return`. That is exactly the shape streaming UI needs.

`yield <Skeleton />`

Sent immediately as an HTTP chunk. The browser renders it and the Suspense boundary resolves.

`await fetch()`

The server keeps the connection open while it waits on the DB/API; the browser keeps showing the skeleton.

`return <Chart />`

The server sends the final payload. The browser replaces the skeleton with the chart, and the stream closes.
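This three-step lifecycle can be simulated end-to-end in plain TypeScript, with strings standing in for React components. The `fetchStockData` stand-in below is hypothetical; a real tool would hit an API here:

```typescript
// Plain-TypeScript simulation of the yield/await/return lifecycle.
// Strings stand in for React components; `fetchStockData` is a fake API call.
async function fetchStockData(symbol: string): Promise<string> {
  return `${symbol}: $123.45`; // pretend this awaited a real network request
}

async function* generate(symbol: string) {
  yield `<Skeleton ${symbol}>`;              // chunk 1: sent immediately
  const data = await fetchStockData(symbol); // connection stays open here
  return `<Chart ${data}>`;                  // final payload; stream closes
}

async function main() {
  const gen = generate("AAPL");
  console.log((await gen.next()).value); // the skeleton arrives first
  console.log((await gen.next()).value); // then the final chart replaces it
}
main();
```

Calling `.next()` twice mirrors what the SDK's stream consumer does: the first call delivers the yielded skeleton, the second resumes the function, waits out the `await`, and delivers the returned chart.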

4. Making the Component Interactive

The component you yield (`<StockChart />`) is a normal React component. It runs on the client. It can even call Server Actions to loop back to the AI!

// Client Component: StockChart.jsx
'use client';
import { useActions } from 'ai/rsc';
// `LineChart` and `toast` come from your charting / toast libraries of choice
import { LineChart } from '@/components/LineChart';
import { toast } from '@/lib/toast';

export default function StockChart({ data, symbol }) {
  const { buyStock } = useActions(); // Access Server Actions from the AI context

  return (
    <div className="chart-card">
      <LineChart data={data} />

      <button
        onClick={async () => {
          await buyStock(symbol);
          toast.success("Bought " + symbol);
        }}
        className="btn-primary"
      >
        Buy {symbol}
      </button>
    </div>
  );
}

This creates a powerful "Human-in-the-Loop" workflow. The AI proposes an action (showing the Buy button), and the User confirms it (clicking).

5. Best Practices

  • Error Boundaries: Always wrap your tool logic in `try/catch` and return a friendly error component instead of crashing the whole stream.
  • Zod Descriptions: The LLM decides which tool to call (and therefore which component appears) based on the tool's `description`. Be descriptive: "Display a candlestick chart for a given stock symbol."
  • Skeleton Height: Ensure your skeleton component has the same pixel height as the final component to avoid CLS (Cumulative Layout Shift).

Conclusion

By treating Tools as Component Factories, we transform the chat from a text log into a dynamic dashboard. The user doesn't just read about data; they interact with it.

Advanced Patterns: Beyond Basic Tool Results

Once you understand the basics of tool-to-component mapping, there are several powerful patterns that unlock even richer user experiences. These patterns separate beginner implementations from production-grade Generative UI apps.

Pattern 1: Multi-Step Tool Chains

Sometimes a single tool result isn't enough. Imagine a user asking "What's the best stock to buy today?" This requires: (1) searching market trends, (2) analyzing specific stocks, (3) showing a comparison UI. Chain multiple tools where each result feeds into the next:

tools: {
  search_market_trends: {
    generate: async function* ({ sector }) {
      yield <TrendsSkeleton />;
      const trends = await fetchTrends(sector);
      // Return data to be used by next tool
      return <TrendsCard trends={trends} />;
    }
  },
  compare_stocks: {
    generate: async function* ({ symbols }) {
      yield <ComparisonSkeleton count={symbols.length} />;
      const data = await Promise.all(symbols.map(fetchStock));
      return <StockComparison stocks={data} />;
    }
  }
}

Pattern 2: Conditional Component Rendering

Not every tool call should show the same component. Use logic inside the generator to decide which component best represents the data. This is more intelligent than hardcoding a single component per tool:

generate: async function* ({ symbol }) {
  yield <StockSkeleton />;
  const data = await fetchStock(symbol);
  
  // Smart component selection based on data characteristics
  if (data.priceHistory.length > 365) {
    return <DetailedCandlestickChart data={data} />;
  } else if (data.priceHistory.length > 30) {
    return <LineChart data={data} />;
  } else {
    return <SimplePriceCard data={data} />;
  }
}

Pattern 3: Progressive Enhancement

Don't just yield once: use multiple yields to add context as data becomes available. Show a basic card first, then enhance it with richer data as API calls complete:

generate: async function* ({ symbol }) {
  yield <StockSkeleton />;
  
  // Phase 1: Show basic price fast
  const price = await fetchCurrentPrice(symbol);
  yield <BasicPriceCard symbol={symbol} price={price} loading={true} />;
  
  // Phase 2: Enhance with full history
  const history = await fetchPriceHistory(symbol);
  return <FullStockCard symbol={symbol} price={price} history={history} />;
}

Performance Considerations

Tool result components run on the server before streaming to the client. This is powerful but introduces performance considerations you must account for in production:

1. Parallelize Independent Data Fetches

If your component needs multiple independent data sources, fetch them in parallel with `Promise.all()` rather than awaiting them sequentially. This can cut latency by 60-80%:

// ❌ Slow: Sequential fetches (3s total)
const price = await fetchPrice(symbol);    // 1s
const news = await fetchNews(symbol);       // 1s  
const analyst = await fetchAnalyst(symbol); // 1s

// ✅ Fast: Parallel fetches (1s total)
const [price, news, analyst] = await Promise.all([
  fetchPrice(symbol),
  fetchNews(symbol),
  fetchAnalyst(symbol)
]);
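The difference is easy to measure with simulated fetches. The `fakeFetch` below is a stand-in that just sleeps for 100 ms, not a real API call:

```typescript
// Measuring sequential vs. parallel latency with three simulated 100 ms fetches.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));
const fakeFetch = async (label: string) => { await delay(100); return label; };

async function sequential(): Promise<number> {
  const t0 = Date.now();
  await fakeFetch("price");
  await fakeFetch("news");
  await fakeFetch("analyst");
  return Date.now() - t0; // roughly 300 ms: each await waits for the previous one
}

async function parallel(): Promise<number> {
  const t0 = Date.now();
  await Promise.all([fakeFetch("price"), fakeFetch("news"), fakeFetch("analyst")]);
  return Date.now() - t0; // roughly 100 ms: all three timers run concurrently
}
```

With three equal-cost fetches, parallelizing cuts wall-clock time to about a third, which is where the 60-80% figure comes from in practice.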

2. Cache Repeated Tool Calls

If users frequently ask for the same stock, weather, or product data, implement server-side caching. Use Redis or Next.js's built-in fetch cache with revalidation:

// Cache stock data for 60 seconds
const data = await fetch(`/api/stock/${symbol}`, {
  next: { revalidate: 60 }
}).then(r => r.json());
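If you are not on Next.js, or want caching around arbitrary loaders, a minimal in-memory TTL cache covers the same ground. This is an illustrative sketch (the `createTtlCache` helper is hypothetical); once you run more than one server instance, a shared store like Redis is the better fit:

```typescript
// Minimal in-memory TTL cache: repeated calls with the same key within the
// TTL window reuse the stored value instead of re-running the loader.
type Entry<T> = { value: T; expires: number };

function createTtlCache<T>(ttlMs: number) {
  const store = new Map<string, Entry<T>>();
  return async function cached(key: string, load: () => Promise<T>): Promise<T> {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh hit
    const value = await load();                            // miss or stale: refetch
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage: identical stock lookups within 60 seconds share one fetch.
const getStockCached = createTtlCache<unknown>(60_000);
// const data = await getStockCached(symbol, () => fetchStockData(symbol));
```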

3. Virtualize Large Lists

If a tool returns a list component with many items (e.g., 100+ search results), use virtualization (react-window or react-virtual) to only render visible items. Rendering 100 DOM nodes at once will slow the client significantly.
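Libraries like react-window handle this for you, but the core of virtualization is just computing which slice of rows intersects the viewport and rendering only that slice. A sketch assuming fixed-height rows (`visibleRange` is a hypothetical helper, not a library API):

```typescript
// Which rows are visible given the scroll position? (fixed-height rows assumed)
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 2 // render a few extra rows so fast scrolling doesn't show gaps
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { start, end }; // render only items[start..end)
}
```

For a 600px viewport and 40px rows, only ~15-19 DOM nodes exist at any time, no matter whether the tool returned 100 or 100,000 results.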

Frequently Asked Questions

What's the difference between `yield` and `return` in tool generators?

`yield` sends an intermediate update to the client while keeping the function running; `return` sends the final value and closes the generator. Use `yield` for skeleton loading states and `return` for the final component. You can have as many `yield` statements as you need, but only one `return`.
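One related subtlety: if you consume a generator yourself with `for await...of`, you only see the yielded values; the return value is only visible through a manual `.next()` call. A quick sketch:

```typescript
// `for await...of` consumes yields only; the return value never reaches the loop.
async function* steps() {
  yield "skeleton"; // intermediate update
  yield "partial";  // another intermediate update
  return "final";   // closes the generator; invisible to for-await
}

async function collect(): Promise<string[]> {
  const seen: string[] = [];
  for await (const v of steps()) seen.push(v);
  return seen; // ["skeleton", "partial"] -- "final" is not included
}
```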

Can I use third-party component libraries in tool results?

Yes! Any React component library (Recharts, Victory, shadcn/ui, Tremor) works perfectly. Just ensure client-side components are marked with 'use client' if they use browser APIs or React hooks. Server components used inside tool generators cannot use hooks or browser events.

How do I pass actions from a generated component back to the AI?

Use the useActions() hook from ai/rsc inside your client component. This gives access to all Server Actions defined in your AI context. When a user clicks a button in the generated component, it triggers a new AI interaction seamlessly.

What happens if the tool data fetch fails?

Always wrap your fetch logic in try/catch and return a user-friendly error component. Never let an unhandled exception crash the entire stream. The error component should tell the user what went wrong and offer a retry option:

try {
  const data = await fetchData();
  return <DataComponent data={data} />;
} catch (error) {
  return (
    <ErrorCard 
      message="Failed to load data" 
      retry={() => console.log("retry")}
    />
  );
}

Can generated components have their own state?

Yes, but only client-side state. Since tool results are server-rendered, only components marked `'use client'` can use React hooks like `useState` and `useEffect`. Server components are stateless: they render once and send HTML. Use a client wrapper around your interactive parts.

How do I debug why a tool isn't being called?

The most common reason is a poorly written tool description. The LLM decides which tool to call based on the `description` field. Be specific and descriptive: instead of "Get stock data", write "Display an interactive stock price chart with historical data for a given stock symbol (e.g., AAPL, TSLA)." Also check that your Zod schema accurately describes the expected parameters.

Next Steps

Now that you understand tool result components deeply, continue building your Generative UI skills:

  • Try Multi-Tool Workflows: Build an app where one tool's output triggers another tool call, creating a chain of component renders.
  • Add Animation: Use Framer Motion to animate skeleton-to-component transitions for polished user experience.
  • Implement Error Boundaries: Wrap each tool's render in a React Error Boundary for bulletproof production reliability.
  • Study the Next Guide: Skeletons and Suspense covers advanced loading patterns that pair directly with tool result components.
Written by

Vivek

AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning: no fluff, just working code and real-world context.

GPT-4o · LangChain · Next.js · Vector DBs · RAG · Vercel AI SDK
