Deep Dive: Vercel AI SDK (RSC)
Jan 2, 2026 • 25 min read
The Vercel AI SDK's RSC API (ai/rsc) is the definitive framework for building Generative UI in Next.js. It solves the fundamental state synchronization problem: LLMs run on servers, but React renders on clients. How do you keep server-generated AI state in sync with client-rendered UI, handle streaming updates, and persist conversations across page refreshes? The SDK's AIState/UIState/Actions triad is the answer.
1. The State Triad: Why Three Layers?
AIState — the serializable, server-side source of truth: the conversation history (messages and tool results) that the LLM reads as context on every turn.
UIState — the client-side list of rendered React nodes the user actually sees. It can hold components and JSX, which AIState never should.
Actions — server functions, callable from client components via useActions(), that read and mutate AIState and stream back UI for UIState.
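As a rough sketch, the three layers have shapes like the following (these types are illustrative, not the SDK's exact exports; `ReactNode` is aliased locally to keep the sketch dependency-free):

```typescript
// Illustrative shapes of the three layers (not the SDK's exact exports)
type ReactNode = unknown; // stand-in for React.ReactNode

// AIState: serializable conversation history — what the LLM reads
type AIState = Array<{ role: 'user' | 'assistant' | 'system'; content: string }>;

// UIState: rendered nodes — what the user sees; never sent to the LLM
type UIState = Array<{ id: number; display: ReactNode }>;

// Actions: server functions the client can invoke
type Actions = {
  submitUserMessage: (content: string) => Promise<UIState[number]>;
};

// A user turn touches all three: append to AIState, call an Action,
// append the returned node to UIState.
const history: AIState = [{ role: 'user', content: 'What is the weather in Paris?' }];
```

The split exists because AIState must survive a round-trip to the database and back, while UIState can hold live component trees that are meaningless to the model.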
2. createAI: The Root Provider Setup
// app/actions.tsx — Core AI provider configuration
'use server';
import { createAI, getMutableAIState, streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { CoreMessage } from 'ai';
import { db } from '@/lib/db';
// Action: Called by client components via useActions()
async function submitUserMessage(content: string) {
'use server';
const aiState = getMutableAIState<typeof AI>();
// 1. Append user message to AI state (server-side)
aiState.update([
...aiState.get(),
{ role: 'user', content } as CoreMessage,
]);
// 2. Stream UI response back to client
const ui = await streamUI({
model: openai('gpt-4o'),
system: "You are a helpful assistant.",
messages: aiState.get(), // Full conversation history as LLM context
tools: {
show_weather: {
description: 'Display weather for a city',
parameters: z.object({ city: z.string(), unit: z.enum(['C', 'F']) }),
generate: async function* ({ city, unit }) {
yield <div>Loading weather for {city}...</div>; // Immediate skeleton
const data = await fetchWeather(city, unit); // your own data-fetching helper
return <WeatherCard data={data} city={city} unit={unit} />; // your own component
},
},
},
// Called when LLM responds with text (not a tool call)
text: ({ content, done }) => {
if (done) {
// Append assistant response to AI state when streaming completes
aiState.done([
...aiState.get(),
{ role: 'assistant', content } as CoreMessage,
]);
}
return <AssistantMessage>{content}</AssistantMessage>;
},
});
return { id: Date.now(), display: ui.value };
}
// The AI provider — add to your layout.tsx
export const AI = createAI({
actions: { submitUserMessage },
// Type-safe initial states
initialAIState: [] as CoreMessage[],
initialUIState: [] as { id: number; display: React.ReactNode }[],
// PERSISTENCE HOOK: Called server-side after each completed exchange
onSetAIState: async ({ state, done }) => {
'use server';
if (done) {
// Serialize and save conversation history to your database
// getSessionId() is app-specific (e.g. read from a cookie) — not part of the SDK
const sessionId = getSessionId();
await db.chatHistory.upsert({
where: { sessionId },
create: { sessionId, messages: JSON.stringify(state) }, // create must include the unique key
update: { messages: JSON.stringify(state) },
});
}
// If done=false, streaming is in progress — don't save partial state
},
});
3. Client Patterns: useUIState and useActions
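For the client hooks in this section to work, the `AI` provider exported from actions.tsx must wrap the component tree. A minimal `app/layout.tsx` sketch (file path and structure follow Next.js App Router conventions):

```typescript
// app/layout.tsx — mount the AI provider so every page shares the state triad
import { AI } from '@/app/actions';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <AI>{children}</AI>
      </body>
    </html>
  );
}
```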
// components/ChatInterface.tsx
'use client';
import { useUIState, useActions } from 'ai/rsc';
import { AI } from '@/app/actions';
import { useState, useRef, useEffect } from 'react';
import { toast } from 'sonner'; // or whichever toast library you use
export function ChatInterface() {
const [messages, setMessages] = useUIState<typeof AI>();
const { submitUserMessage } = useActions<typeof AI>();
const [input, setInput] = useState('');
const [isLoading, setIsLoading] = useState(false);
const messagesEndRef = useRef<HTMLDivElement>(null);
// Auto-scroll to latest message
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
async function handleSubmit(e: React.FormEvent) {
e.preventDefault();
if (!input.trim() || isLoading) return;
const userInput = input;
setInput(''); // Clear input immediately (good UX)
setIsLoading(true);
// OPTIMISTIC UPDATE: Show user message instantly (before server responds)
setMessages(current => [
...current,
{
id: Date.now(),
display: <UserMessage>{userInput}</UserMessage>,
},
]);
try {
// SERVER CALL: Submit to AI action (may take 1-5s)
const response = await submitUserMessage(userInput);
// SYNC: Append the server-returned UI (streaming chart, text, etc.)
setMessages(current => [...current, response]);
} catch (error) {
// ROLLBACK: Remove optimistic message on failure
setMessages(current => current.slice(0, -1));
toast.error("Failed to send message. Please try again.");
} finally {
setIsLoading(false);
}
}
return (
<div style={{ display: 'flex', flexDirection: 'column', height: '100vh' }}>
{/* Message list */}
<div style={{ flex: 1, overflowY: 'auto', padding: '1rem' }}>
{messages.map(message => (
<div key={message.id}>{message.display}</div>
))}
{isLoading && <TypingIndicator />}
<div ref={messagesEndRef} />
</div>
{/* Input area */}
<form onSubmit={handleSubmit} style={{ padding: '1rem', borderTop: '1px solid rgba(255,255,255,0.1)' }}>
<div style={{ display: 'flex', gap: '0.5rem' }}>
<input
value={input}
onChange={e => setInput(e.target.value)}
placeholder="Ask anything..."
disabled={isLoading}
style={{ flex: 1, padding: '0.75rem', background: 'var(--bg-tertiary)', border: '1px solid rgba(255,255,255,0.1)', borderRadius: '8px', color: 'inherit' }}
/>
<button type="submit" disabled={!input.trim() || isLoading} style={{ padding: '0.75rem 1.5rem', background: 'var(--primary)', border: 'none', borderRadius: '8px', cursor: 'pointer' }}>
{isLoading ? '...' : 'Send'}
</button>
</div>
</form>
</div>
);
}
4. createStreamableValue: Streaming Raw JSON Data
// Use createStreamableValue when you need to stream raw data, not UI
// Perfect for: progress bars, live counters, coordinate updates, status feeds
// SERVER ACTION
import { createStreamableValue } from 'ai/rsc';
export async function runLongTask() {
const streamable = createStreamableValue<{ status: string; progress: number; result?: string }>({
status: 'starting',
progress: 0,
});
// Run async work in background (IIFE pattern)
(async () => {
streamable.update({ status: 'fetching_data', progress: 10 });
const rawData = await fetchFromDatabase();
streamable.update({ status: 'analyzing', progress: 40 });
const analysis = await runAnalysis(rawData);
streamable.update({ status: 'generating_report', progress: 75 });
const report = await generateReport(analysis);
// Final value completes the stream
streamable.done({ status: 'complete', progress: 100, result: report });
})();
// Return the stream immediately (before async work finishes!)
return streamable.value;
}
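The update/done contract above can be understood without the SDK: conceptually it is an async queue that a producer pushes into and a consumer drains until the stream closes. Everything below is a dependency-free simulation of that contract — none of these names are ai/rsc APIs:

```typescript
// Dependency-free sketch of the update/done streaming contract
type Update = { status: string; progress: number; result?: string };

function createStreamableSim<T>() {
  const queue: T[] = [];
  let closed = false;
  let wake: (() => void) | null = null;

  const push = (v: T) => {
    queue.push(v);
    wake?.(); // wake a waiting consumer, if any
  };

  return {
    update: (v: T) => push(v),                   // intermediate value
    done: (v: T) => { push(v); closed = true; }, // final value closes the stream
    async *read(): AsyncGenerator<T> {
      while (true) {
        while (queue.length > 0) yield queue.shift() as T;
        if (closed) return;
        await new Promise<void>((resolve) => { wake = resolve; });
      }
    },
  };
}
```

The key property — mirrored by the real `streamable.value` — is that the producer returns immediately while the consumer keeps receiving values until `done` is called.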
// CLIENT COMPONENT
'use client';
import { useStreamableValue, type StreamableValue } from 'ai/rsc';
type TaskUpdate = { status: string; progress: number; result?: string };
export function ProgressDisplay({ stream }: { stream: StreamableValue<TaskUpdate> }) {
const [data, error, isLoading] = useStreamableValue(stream);
return (
<div>
<div style={{ height: '4px', background: 'var(--bg-tertiary)', borderRadius: '2px' }}>
<div style={{
height: '100%',
width: `${data?.progress ?? 0}%`,
background: 'var(--primary)',
borderRadius: '2px',
transition: 'width 0.3s ease',
}} />
</div>
<p>{data?.status} — {data?.progress}%</p>
{data?.result && <ResultDisplay content={data.result} />}
</div>
);
}
Frequently Asked Questions
What's the difference between streamUI and createStreamableUI?
streamUI is the newer, recommended API in Vercel AI SDK v3. It uses async generator functions (yield for intermediate states, return for final state) and integrates directly with the provider's tool calling for type-safe, Zod-validated tool definitions. createStreamableUI is the lower-level primitive that streamUI builds on — it provides a mutable UI stream you can .update() and .done() from anywhere in async code. Use streamUI for standard tool-based AI pipelines. Use createStreamableUI when you need more direct control over the streaming lifecycle, such as streaming from webhooks or from background jobs that aren't directly tied to a single LLM call.
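A minimal sketch of that lower-level control (`doWork` is a hypothetical long-running helper; plain strings are used as ReactNodes to keep the sketch terse):

```typescript
import { createStreamableUI } from 'ai/rsc';

declare function doWork(step: string): Promise<void>; // hypothetical

export async function jobStatusUI() {
  // The initial node is shown immediately; strings are valid ReactNodes
  const ui = createStreamableUI('Queued…');

  (async () => {
    await doWork('step-1');
    ui.update('Processing…'); // swap the streamed node mid-flight
    await doWork('step-2');
    ui.done('Finished ✔');    // final node; closes the stream
  })();

  return ui.value; // return immediately — the client sees updates as they land
}
```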
How do I load past conversation history from the database on page load?
In your page component (a Server Component), fetch the stored AIState from your database using the session ID, then pass it as the initialAIState prop to your AI provider. The provider re-hydrates both the AIState (for LLM context continuity) and reconstructs the UIState (re-render the components from the serialized message history). Store message content, timestamps, and role in your DB — but store tool result components as structured JSON you can re-render client-side, not as serialized React trees which can't be safely restored.
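A sketch of that rehydration in a Server Component page (`getSessionId` and the `db.chatHistory` schema are the same hypothetical helpers used in the persistence hook earlier):

```typescript
// app/chat/page.tsx — Server Component: rehydrate a stored conversation
import { AI } from '@/app/actions';
import { db } from '@/lib/db';
import { ChatInterface } from '@/components/ChatInterface';
import type { CoreMessage } from 'ai';

declare function getSessionId(): string; // app-specific (e.g. read from cookies)

export default async function ChatPage() {
  const row = await db.chatHistory.findUnique({
    where: { sessionId: getSessionId() },
  });
  const initialAIState: CoreMessage[] = row ? JSON.parse(row.messages) : [];

  return (
    <AI initialAIState={initialAIState}>
      <ChatInterface />
    </AI>
  );
}
```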
Conclusion
The Vercel AI SDK's RSC layer is the glue that makes Generative UI architecturally coherent. AIState holds the ground truth conversation history that LLMs read, UIState holds the client-rendered component tree users see, and Actions are the server functions that translate between them. streamUI with async generators makes yielding loading skeletons before final components a natural pattern rather than an afterthought. createStreamableValue extends this to non-UI streaming for progress bars and live data. Combined with the persistence hook, you get a complete stateful AI application framework in a few hundred lines of TypeScript.
Vivek
AI Engineer
Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.