Assistant-UI vs CopilotKit: how hard is it to migrate an existing in-app assistant UI without rewriting my backend?

If you already have an in‑app assistant and you’re comparing Assistant‑UI vs CopilotKit, the real question isn’t “which React library is better?” but “how painful will migration be—and do I have to touch my backend at all?”

This guide breaks down how each option fits into an existing stack, how much refactor you should expect, and practical migration patterns if you want to swap your current UI for Assistant‑UI without rewriting your backend.


What problem are you actually solving?

Most teams looking at Assistant‑UI or CopilotKit are trying to solve one or more of these:

  • “Our current chat UI is brittle and custom; we want a robust, ChatGPT‑style experience.”
  • “We want streaming, tools, and memory, but we don’t want to reinvent the wheel.”
  • “We’re already happy with our backend (Vercel AI SDK, LangChain, LangGraph, custom APIs) and don’t want to rewrite it.”

That last point is critical. A migration is “easy” only if you can drop in a new UI layer while keeping:

  • Your LLM provider and model
  • Your routing/orchestration (LangChain, LangGraph, custom framework)
  • Your business logic and tools
  • Your existing API endpoints

So the core evaluation becomes:

How UI‑opinionated versus backend‑opinionated is each library, and how well does it plug into what you already have?


Assistant‑UI in a nutshell

Assistant‑UI is an open‑source TypeScript/React library for AI chat. It’s explicitly designed as a front‑end layer:

  • React chat UI that feels like ChatGPT
  • Pre‑built, production‑ready components so you don’t have to build your own chat interface
  • Works with any LLM provider and popular ecosystems like Vercel AI SDK, LangChain, and LangGraph
  • Built‑in support for:
    • Streaming responses
    • Interruptions, retries, and multi‑turn conversations
    • High‑performance rendering and small bundle size
  • Assistant UI Cloud can store threads so sessions persist across refreshes without you building that infrastructure

Because it’s UI‑first, its core abstraction is:

“Give me an API (or handler) that can stream assistant messages and tool outputs, and I’ll handle the rest of the experience.”

That’s exactly what you want if you don’t want to rewrite your backend.
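That contract can be pinned down in a few lines of TypeScript. This is a sketch of the implied request shape, not Assistant‑UI's actual types — the names here (`ChatMessage`, `ChatRequest`, `isChatMessage`) are illustrative:

```typescript
// A sketch of the contract implied above: role/content messages in,
// a streamed response out. Names are illustrative, not Assistant-UI's
// actual exported types.
type ChatRole = "system" | "user" | "assistant";

interface ChatMessage {
  role: ChatRole;
  content: string;
}

interface ChatRequest {
  threadId?: string;
  messages: ChatMessage[];
}

// Runtime guard, e.g. for validating incoming request bodies in an adapter.
function isChatMessage(value: unknown): value is ChatMessage {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.role === "system" || v.role === "user" || v.role === "assistant") &&
    typeof v.content === "string"
  );
}
```

If your backend already speaks something close to this, the UI swap is mostly wiring.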


CopilotKit in a nutshell (high‑level positioning)

CopilotKit is also a React‑friendly way to build AI copilots, but its typical pitch is less “swap in a chat UI on any backend” and more:

  • Deeper coupling with your app state and UI actions
  • Strong focus on in‑app copilots that directly manipulate frontend state and components
  • Often encourages an opinionated pattern for:
    • How you define tools / actions
    • How context is passed from your app to the AI
    • How the AI “drives” UI changes

It’s powerful if you want an AI agent that orchestrates UI workflows, but that extra coupling can mean more work if you already have a backend agent architecture or API contract you like.

In other words:

  • Assistant‑UI: “React chat UI so you can focus on your agent logic.”
  • CopilotKit: “Tightly integrated copilot that’s aware of, and acts on, your frontend state and flows.”

That difference is what drives migration difficulty.


Migration difficulty: Assistant‑UI vs CopilotKit

1. Using an existing chat/agent backend

Scenario:
You already have one of the following in production:

  • A REST or WebSocket API that handles:
    • user messages
    • tool calls
    • streaming token responses
  • A backend built with:
    • Vercel AI SDK
    • LangChain
    • LangGraph
    • Custom orchestration layer

Assistant‑UI

  • Goal: Replace your current chat UI with Assistant‑UI, keep your backend unchanged.
  • Typical work:
    • Map your existing “send message” endpoint to Assistant‑UI’s configuration.
    • Ensure your API can stream responses in a supported format (or adapt with a small adapter).
    • Optionally plug in Assistant UI Cloud for thread persistence if you don’t already store sessions.

Backend rewrite required?
Usually no. At most, a thin adapter endpoint. A sketch, where `runExistingAgent` is a hypothetical stand‑in for your current pipeline:

// Example: adapter-like API route for Assistant‑UI
// (`runExistingAgent` is illustrative — it stands in for whatever
// agent/LLM pipeline you already run)
export async function POST(req: Request) {
  // Parse the thread ID and message history sent by the chat UI
  const { threadId, messages } = await req.json();

  // Call your existing agent/LLM pipeline; it should yield a stream
  const stream = await runExistingAgent({
    threadId,
    messages,
  });

  // Return a streaming response in the shape Assistant‑UI expects
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
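The `stream` handed to `Response` has to be a byte stream. One way to build it, assuming your pipeline can expose tokens as an async iterable (the SSE framing below is a sketch — check the exact wire format your UI layer expects):

```typescript
// Wrap an async iterable of tokens (e.g. from your existing agent loop)
// into a ReadableStream of SSE-formatted bytes.
function tokensToSSE(tokens: AsyncIterable<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const token of tokens) {
          // One SSE event per token
          controller.enqueue(encoder.encode(`data: ${JSON.stringify(token)}\n\n`));
        }
        // Conventional end-of-stream sentinel
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      } finally {
        controller.close();
      }
    },
  });
}
```

With a helper like this, the adapter route stays a few lines long no matter how complex the agent behind it is.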

Migration difficulty:
Low. You mostly swap the frontend component and wire it to an endpoint that proxies to your existing agent.

CopilotKit

  • Goal: Use CopilotKit’s copilot UI with your existing backend agent.
  • Typical work:
    • Align their tool/action definitions with your existing tool system.
    • Map CopilotKit’s expected message/tool formats to your backend’s contract.
    • Potentially refactor where context is generated (frontend vs backend).
    • Adjust how stateful interactions and side effects are handled.

Backend rewrite required?
Not strictly, but in practice you often end up re‑shaping:

  • How you define tools
  • How you structure the prompt or system messages
  • How you expose context and side effects

Migration difficulty:
Medium to high if your backend is already opinionated. More friction arises when your existing agent design doesn’t match CopilotKit’s assumptions.


2. If you want to keep a “dumb” backend and evolve over time

Some teams start with a simple POST /chat endpoint and slowly move to more advanced agents (LangGraph, LangChain, tool calling, memory, etc.).

Assistant‑UI

Because Assistant‑UI is backend‑agnostic:

  • You can start with a very simple endpoint today.
  • Later, swap the internals for LangChain or LangGraph without changing the UI layer.
  • Assistant‑UI already plays nicely with LangGraph and LangChain; the integration is highlighted by the project and its community:
    • “Build stateful conversational AI agents with LangGraph and assistant-ui.”
    • “Pleasure to work with Simon… bring streaming, gen UI, and human-in-the-loop with LangGraph Cloud + assistant-ui.”
    • “React chat ui so you can focus on your agent logic.”

You’re free to evolve your backend architecture: the UI layer doesn’t force a new mental model.

CopilotKit

Because CopilotKit is often tied to how the copilot interacts with app state and actions:

  • The frontend and backend designs tend to evolve together.
  • Changes to backend capabilities frequently propagate into the way you define tools/actions on the frontend.
  • If your backend grows more sophisticated or moves to another orchestrator, you may need to re‑bridge those assumptions.

Result: Assistant‑UI is easier if you want a UI that remains stable while your backend evolves.


Concrete migration scenarios (no backend rewrite)

Scenario A: Custom chat UI → Assistant‑UI, existing LangChain/LangGraph backend

You currently have:

  • A custom React chat component
  • A /api/chat endpoint that:
    • Receives an array of messages
    • Uses LangChain or LangGraph to run an agent
    • Streams tokens back to the client

Typical steps:

  1. Install Assistant‑UI

    npm install assistant-ui
    
  2. Replace your chat UI (illustrative — exact package and component names vary by version, so follow the official installation guide)

    import { AssistantUI } from "assistant-ui";
    
    export function ChatPage() {
      return (
        <AssistantUI
          endpoint="/api/chat"
          // optional: pass threadId, initialMessages, etc.
        />
      );
    }
    
    
  3. Keep your backend as is

    Your /api/chat logic—built on LangChain or LangGraph—can remain unchanged as long as:

    • It accepts messages in a standard format (role/content).
    • It can stream tokens/events in a way Assistant‑UI understands (you can adapt this with a thin wrapper if needed).
  4. Persist sessions (optional)

    • Use Assistant UI Cloud to store threads so sessions survive refreshes without writing your own session management.
    • Or wire to your own database if you already have a session model.

No need to restructure your agent logic, prompt templates, or tools. The migration is almost entirely front‑end.
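The "thin wrapper" from step 3 can often be a single generic mapper. Chunk shapes from LangChain/LangGraph streams vary by version, so in this sketch only the `pick` callback is backend-specific:

```typescript
// Generic thin wrapper: map whatever chunk objects your backend streams
// into plain token strings for the UI. Only `pick` knows the chunk shape.
async function* toTokens<T>(
  chunks: AsyncIterable<T>,
  pick: (chunk: T) => string | undefined,
): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    const token = pick(chunk);
    // Skip non-text events (tool calls, metadata, etc.)
    if (token) yield token;
  }
}
```

For example, if your stream's chunks carry text on a `content` field, the call might look like `toTokens(stream, (c) => typeof c.content === "string" ? c.content : undefined)` — adjust `pick` to your actual chunk type.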


Scenario B: Custom chat UI → CopilotKit, existing “agent” backend

You currently have:

  • Custom React chat UI
  • Backend agent endpoint with its own:
    • message format
    • tool invocation pattern
    • state management

Typical steps:

  1. Install CopilotKit and follow its setup.
  2. Redefine tools/actions in the shape CopilotKit expects.
  3. Map CopilotKit’s messages and actions to your backend’s agent interface.
  4. Possibly refactor where tools are implemented (frontend vs backend) or how context is injected.

You can keep your core LLM logic, but you’re more likely to touch:

  • Tool definitions
  • Context passing
  • How the copilot drives UI behavior

The practical effect: even if you don’t fully “rewrite” your backend, you’re often reshaping both ends to fit CopilotKit’s integration model.


When Assistant‑UI is the easier choice

Assistant‑UI tends to be the easier migration path if:

  • You already have an LLM backend you like (Vercel AI SDK, LangChain, LangGraph, custom).
  • You want a drop‑in chat UI that:
    • handles streaming, retries, and interruptions
    • works across providers
    • doesn’t dictate your agent’s internal design
  • You don’t want to redo your agent logic, tools, or orchestration.
  • You care about performance and a minimal bundle size.
  • You want session persistence (via Assistant UI Cloud) without building your own thread store.

Because Assistant‑UI “just renders the chat interface and stores threads in Assistant UI Cloud so sessions persist across refreshes,” it stays firmly on the UI side of the boundary and lets you keep your backend untouched.


When CopilotKit might make sense despite the migration cost

You may still choose CopilotKit if:

  • You want the AI to drive complex front‑end workflows tightly coupled to your React state.
  • Your backend is minimal or flexible enough that refactoring it to align with CopilotKit isn’t a big concern.
  • You’re starting mostly from scratch, so you’re not trying to protect an existing agent architecture.

In those cases, the extra coupling to frontend state can be a feature, not a bug. But it’s less ideal if your priority is “do not rewrite the backend.”


Practical decision checklist

If your primary constraint is “no backend rewrite”, ask yourself:

  1. Do I already have an API that can handle messages and stream responses?
    • If yes → Assistant‑UI can typically plug in with minimal or no backend changes.
  2. Am I using (or planning to use) LangChain or LangGraph?
    • Assistant‑UI is already used with these and highlighted by their ecosystem.
  3. Do I want to keep my agent logic, tools, and memory implementation as‑is?
    • Assistant‑UI is UI‑focused and lets you keep all of that intact.
  4. Do I want the AI to deeply control frontend state and flows?
    • If yes, CopilotKit might be the better fit—but expect more integration work.

If your top priority is a fast migration path—especially for an existing in‑app assistant—Assistant‑UI usually offers the smoother route, because you can:

Build once, plug into your current backend, and be ready for production without re‑architecting your agent.


How to plan a low‑risk migration to Assistant‑UI

To minimize risk while you evaluate:

  1. Run Assistant‑UI alongside your current UI behind a feature flag.
  2. Point both UIs at the same backend endpoint.
  3. Compare behavior:
    • Streaming reliability
    • Latency and perceived performance
    • User experience (interruptions, retries, multi‑turn continuity)
  4. Gradually ramp traffic from old UI to Assistant‑UI.

Because Assistant‑UI does not require you to change your backend contracts, you can test it in production with almost no backend risk, then fully cut over when you’re satisfied.
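The gradual ramp in step 4 needs stable per-user bucketing so each user consistently sees the same UI. A minimal sketch (the function names are illustrative, not from either library):

```typescript
// Deterministic percentage rollout: hash the user ID into a 0–99 bucket
// so the same user always lands in the same bucket while you ramp traffic.
function bucketFor(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

function pickChatUI(
  userId: string,
  rolloutPercent: number,
): "assistant-ui" | "legacy" {
  return bucketFor(userId) < rolloutPercent ? "assistant-ui" : "legacy";
}
```

Raising `rolloutPercent` from 0 to 100 over a week moves users to the new UI in stable cohorts, and setting it back to 0 is an instant rollback since the backend never changed.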


In summary: for teams with an existing in‑app assistant and a backend they’re happy with, migrating to Assistant‑UI is typically low friction and backend‑preserving. CopilotKit can be powerful, but it often expects a tighter coupling between frontend and backend that makes migration more involved, especially if you don’t want to touch your existing agent implementation.