
Fullstack Agents: Vercel AI SDK

Dec 29, 2025 • 20 min read

If you're building the frontend for an AI agent in React or Next.js, the Vercel AI SDK is the standard library. It abstracts away the complexity of streaming token-by-token text from server to client, handles tool call rendering, manages conversation state, and works with every major LLM provider through a unified interface. Building AI apps without it is like building a web app without a framework.

1. Installation & Setup

npm install ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/react zod

# The SDK is split into:
# - ai: Core utilities (streamText, generateText, generateObject, tool)
# - @ai-sdk/openai: OpenAI provider (GPT-4o, o1, embeddings)
# - @ai-sdk/anthropic: Anthropic provider (Claude 3.5)
# - @ai-sdk/google: Google provider (Gemini)
# - @ai-sdk/react: React hooks (useChat, useCompletion, useAssistant)
# - zod: Schemas for tool parameters and structured output

2. The useChat Hook: Streaming Conversations

The useChat hook manages the entire client-side state of a streaming chat: message list, loading state, input field, and form submission. All you need is a matching API route on the server:

// components/ChatInterface.tsx (Client Component)
'use client';
import { useChat } from '@ai-sdk/react'; // 'ai/react' in SDK versions before 4.0

export default function ChatInterface() {
  const {
    messages,         // All messages in the conversation
    input,            // Current value of the input field
    handleInputChange, // onChange handler for the input
    handleSubmit,     // onSubmit handler for the form
    isLoading,        // true while streaming
    error,            // Error object if request failed
    reload,           // Re-run the last user message
    stop,             // Cancel the current stream
  } = useChat({
    api: '/api/chat',          // Your backend route
    initialMessages: [],       // Pre-populate with history
    onError: (err) => console.error(err),
    onFinish: (msg) => saveToDatabase(msg), // After stream completes
  });

  return (
    <div style={{ display: 'flex', flexDirection: 'column', height: '100vh' }}>
      {/* Messages */}
      <div style={{ flex: 1, overflowY: 'auto', padding: '1rem' }}>
        {messages.map(message => (
          <div key={message.id} style={{
            alignSelf: message.role === 'user' ? 'flex-end' : 'flex-start',
            background: message.role === 'user' ? '#3b82f6' : '#1e1e2e',
            padding: '0.75rem 1rem',
            borderRadius: '12px',
            marginBottom: '0.5rem',
            maxWidth: '70%',
          }}>
            <strong>{message.role === 'user' ? 'You' : 'Agent'}</strong>
            <p style={{ margin: 0 }}>{message.content}</p>
          </div>
        ))}
        {isLoading && <div className="typing-indicator">●●●</div>}
      </div>

      {/* Input form */}
      <form onSubmit={handleSubmit} style={{ display: 'flex', padding: '1rem', gap: '0.5rem' }}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          disabled={isLoading}
          style={{ flex: 1, padding: '0.75rem', borderRadius: '8px' }}
        />
        <button type="submit" disabled={isLoading}>Send</button>
        {isLoading && <button type="button" onClick={stop}>Stop</button>}
      </form>
    </div>
  );
}

3. The Server Route: streamText

// app/api/chat/route.ts (Next.js App Router)
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export const maxDuration = 30; // Allow up to 30s for slow models

export async function POST(req: Request) {
  const { messages } = await req.json();

  // streamText starts the model call and returns a result object
  // whose stream can be converted into a Response
  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful assistant.',
    messages,  // Pass the full conversation history from the client
    temperature: 0.7,
    maxTokens: 1000,
  });

  // toDataStreamResponse() formats the stream in AI SDK protocol
  // (which useChat on the client knows how to decode)
  return result.toDataStreamResponse();
}
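Under the hood, toDataStreamResponse() emits the SDK's data stream protocol: newline-delimited parts of the form `<type>:<JSON>`, where (per the v4 protocol docs) text deltas use the `0:` prefix. You never need to parse this yourself — useChat does it — but a toy parser makes the wire format concrete. This is an illustrative sketch, not the SDK's actual decoder, and the exact part types should be checked against the protocol reference:

```typescript
// Illustrative parser for the AI SDK data stream wire format (assumed shape:
// each line is `<typeCode>:<JSON payload>`; text deltas use code "0").
type StreamPart = { code: string; value: unknown };

function parseDataStreamLine(line: string): StreamPart {
  const sep = line.indexOf(':');
  if (sep === -1) throw new Error(`Malformed stream line: ${line}`);
  return {
    code: line.slice(0, sep),
    value: JSON.parse(line.slice(sep + 1)),
  };
}

// Concatenating the "0" (text delta) parts rebuilds the assistant message.
const lines = ['0:"Hello"', '0:" world"'];
const text = lines
  .map(parseDataStreamLine)
  .filter((p) => p.code === '0')
  .map((p) => p.value as string)
  .join('');
console.log(text); // "Hello world"
```

Knowing the protocol exists is mostly useful for debugging: if you open the route in the network tab, these prefixed lines are what you'll see instead of raw model tokens.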

4. Tool Calls: Rendering Dynamic UI

The AI SDK handles the full tool call lifecycle: the model decides to call a tool, the server executes it, and the result streams back to the client. You can render custom UI for each tool result:

// Server: define tools the model can call
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4o'),
  tools: {
    getWeather: tool({
      description: 'Get current weather for a location',
      parameters: z.object({
        city: z.string().describe('City name'),
        units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
      }),
      execute: async ({ city, units }) => {
        // Your actual tool implementation
        const data = await fetchWeatherAPI(city, units);
        return { temperature: data.temp, condition: data.condition, city };
      }
    }),
  },
  messages,
});

// Client: render tool invocations as custom UI
{messages.map(msg => (
  <div key={msg.id}>
    {msg.content && <p>{msg.content}</p>}
    {msg.toolInvocations?.map(ti => (
      <div key={ti.toolCallId}>
        {ti.toolName === 'getWeather' && ti.state === 'result' && (
          <WeatherCard
            city={ti.result.city}
            temperature={ti.result.temperature}
            condition={ti.result.condition}
          />
        )}
        {ti.state === 'call' && <Spinner label="Fetching weather..." />}
      </div>
    ))}
  </div>
))}
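One subtlety: by default, a single generation stops after the tool calls execute, so the user sees the WeatherCard but no follow-up prose from the model. Recent SDK versions support multi-step generations via a maxSteps option (verify the name against your SDK version's docs), which feeds the tool result back to the model so it can write a final text answer. A minimal sketch:

```typescript
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4o'),
  tools: {
    getWeather: tool({
      description: 'Get current weather for a location',
      parameters: z.object({ city: z.string() }),
      // Stubbed implementation for illustration only.
      execute: async ({ city }) => ({ city, temperature: 21 }),
    }),
  },
  // Allow up to 3 model turns: call tool -> read result -> answer in text.
  maxSteps: 3,
  messages: [{ role: 'user', content: 'Weather in Paris?' }],
});
```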

5. generateText for Non-Streaming Use Cases

import { generateText, generateObject } from 'ai';
import { z } from 'zod';

// For background jobs that don't need streaming
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize this article: ' + articleText,
});

// For structured output — ensures correct JSON schema
const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    confidence: z.number().min(0).max(1),
    topics: z.array(z.string()),
  }),
  prompt: 'Analyze the sentiment of this review: ' + reviewText,
});

console.log(object.sentiment); // TypeScript knows the shape!
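When you want structured output to arrive progressively — say, filling in form fields as the model generates them — the SDK also exposes streamObject, which yields partial objects as generation proceeds. A sketch based on the v4 API shape (double-check the field names against your version's docs):

```typescript
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { partialObjectStream } = streamObject({
  model: openai('gpt-4o'),
  schema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    topics: z.array(z.string()),
  }),
  prompt: 'Analyze the sentiment of this review: great product, slow shipping',
});

// Each iteration yields a progressively more complete partial object,
// e.g. {} -> { sentiment: 'positive' } -> { sentiment: 'positive', topics: [...] }
for await (const partial of partialObjectStream) {
  console.log(partial);
}
```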

6. Multi-Provider Support

import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Switch provider by changing one line — same API, same hooks
const model = process.env.AI_PROVIDER === 'anthropic'
  ? anthropic('claude-3-5-sonnet-20241022')
  : process.env.AI_PROVIDER === 'google'
  ? google('gemini-1.5-pro')
  : openai('gpt-4o');  // Default

const result = streamText({ model, messages });
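If you juggle more than two or three providers, the nested ternary gets unwieldy. Recent SDK versions ship a provider registry (createProviderRegistry — confirm availability in your version) that resolves models from a single "provider:model" string, which slots neatly into env-var or per-user configuration:

```typescript
import { createProviderRegistry } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const registry = createProviderRegistry({ openai, anthropic });

// One string — from an env var, a config file, or a per-user setting —
// selects both the provider and the model.
const model = registry.languageModel(
  process.env.AI_MODEL ?? 'openai:gpt-4o'
);
```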

7. Middleware: Logging & Guardrails

import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';

// Add middleware to any model
const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

// Or use custom middleware for logging / guardrails
const guardedModel = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: {
    transformParams: async ({ params }) => {
      // Modify params before sending to model
      return params;
    },
    wrapGenerate: async ({ doGenerate, params }) => {
      console.log('Calling LLM with input:', params.prompt);
      const result = await doGenerate();
      console.log('Response tokens used:', result.usage);
      return result;
    }
  }
});
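Because transformParams is just a function of its inputs, guardrails written this way are unit-testable without ever touching a model. Here's a sketch of a middleware that caps conversation length to bound cost — the field names mirror the v4 middleware shape described above, and params.prompt holding the message array is an assumption to verify against your SDK version:

```typescript
// Guardrail middleware sketch: drop all but the most recent messages.
// Shape matches wrapLanguageModel's `middleware` option (AI SDK v4, assumed).
const MAX_MESSAGES = 50;

const promptLengthGuard = {
  transformParams: async ({ params }: { params: { prompt: unknown[] } }) => {
    if (params.prompt.length > MAX_MESSAGES) {
      // Keep only the tail of the conversation to bound token usage.
      return { ...params, prompt: params.prompt.slice(-MAX_MESSAGES) };
    }
    return params;
  },
};
```

Pass it as `middleware: promptLengthGuard` to wrapLanguageModel, exactly like the logging example above.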

Frequently Asked Questions

Should I use the Vercel AI SDK or call the OpenAI API directly?

Use the AI SDK whenever you're building a React/Next.js app. The useChat hook alone saves hours of work managing streaming state, error handling, and message list updates. Direct API calls make sense for Python backends or non-React frontends.

How do I persist chat history across page reloads?

Pass initialMessages to useChat loaded from your database. Save messages using the onFinish callback. On the server, associate conversations with user sessions and retrieve history from your DB at the start of each session.
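As a server-side sketch (the db helper here is a hypothetical placeholder for your own persistence layer), streamText's onFinish callback fires once the full response has been generated, which is the natural place to save the assistant's reply:

```typescript
// app/api/chat/route.ts — persist the assistant reply when the stream ends
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Hypothetical persistence layer — swap in your own database client.
declare const db: {
  saveMessage(chatId: string, msg: { role: string; content: string }): Promise<void>;
};

export async function POST(req: Request) {
  const { messages, chatId } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    // Runs server-side after the full response is generated.
    onFinish: async ({ text }) => {
      await db.saveMessage(chatId, { role: 'assistant', content: text });
    },
  });

  return result.toDataStreamResponse();
}
```

On the next page load, a server component fetches the saved rows and hands them to useChat as initialMessages.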

Conclusion

The Vercel AI SDK provides the plumbing for AI UIs that would otherwise take weeks to build correctly: streaming, state management, tool call rendering, provider abstraction, and React hooks. It has become the de facto standard for Next.js-based AI applications, and understanding it deeply will let you ship sophisticated AI products far faster than wiring that infrastructure by hand.


šŸ‘Øā€šŸ’»
Written by

Vivek

AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.

Tags: GPT-4o, LangChain, Next.js, Vector DBs, RAG, Vercel AI SDK