How do I set up Assistant Cloud with Assistant-UI so conversations persist and users can resume threads?

Most teams implementing AI chat quickly realize they need more than an in-memory UI. Users expect conversations to persist across page refreshes, devices, and sessions—and to be able to resume old threads at any time. Assistant Cloud is designed to solve exactly this for Assistant-UI, giving you durable conversation storage with minimal setup.

In this guide, you’ll learn how to set up Assistant Cloud with Assistant-UI so conversations persist automatically and users can resume threads seamlessly.


Why use Assistant Cloud with Assistant-UI?

Assistant-UI gives you a polished, ChatGPT-style UX with:

  • Production-ready chat components
  • Streaming and interruptions
  • Multi-turn state management

Assistant Cloud adds:

  • Persistent storage for threads and messages
  • Sessions that survive refreshes and sign-ins
  • A unified place to load and resume conversations later

Together, they give you a complete, production-ready chat experience that behaves like modern AI products users already know.


Prerequisites

Before wiring up Assistant Cloud with Assistant-UI, you should have:

  • A React app (Next.js, Vite, Create React App, etc.)
  • Assistant-UI installed
  • An LLM backend (Vercel AI SDK, LangChain, LangGraph, or any LLM provider)

The Assistant-UI docs describe the integration this way:

“Conversations and streaming AI output are powered by @assistantui. It renders the chat interface and stores threads in Assistant UI Cloud so sessions persist across refreshes.”

This means most of the heavy lifting is done for you—your main job is to initialize and connect the cloud chat state.


Step 1: Initialize Assistant-UI in your project

If you haven’t already, scaffold Assistant-UI into your app:

npx assistant-ui init

This sets up:

  • Core chat components
  • Default state management
  • Basic configuration boilerplate

Once installed, you’ll typically have something like a <Chat> or <AssistantUI> component that you render in your React tree.


Step 2: Enable cloud-based chat state

Assistant-UI provides a cloud-aware state layer (a hook or provider, shown here as useCloudChat) that transparently talks to Assistant Cloud.

At a high level, you’ll:

  1. Wrap your chat UI with a provider that knows how to talk to Assistant Cloud
  2. Use the cloud-aware hook to manage messages and threads

A typical structure looks like:

import { CloudChatProvider } from "@assistantui/react-cloud";
import { Chat } from "@assistantui/react";

function App() {
  return (
    <CloudChatProvider
      // or however you provide auth/config; NEXT_PUBLIC_ values are exposed
      // to the browser, so keep truly secret keys behind a server-side proxy
      apiKey={process.env.NEXT_PUBLIC_ASSISTANT_UI_KEY}
    >
      <Chat />
    </CloudChatProvider>
  );
}

Under the hood, this provider:

  • Connects to Assistant Cloud
  • Stores threads and messages remotely
  • Rehydrates state when the user reopens the app

Your exact import names may differ depending on the version, but the pattern is always:

Wrap chat in a “cloud” provider → use that state in your UI.


Step 3: Wire the chat state to your LLM backend

Assistant-UI doesn’t lock you into a specific LLM. It works with:

  • Vercel AI SDK
  • LangChain
  • LangGraph
  • Any other LLM provider

You connect your backend by passing a handler that turns user messages into AI responses, while Assistant-UI + Assistant Cloud handle:

  • Storing the conversation
  • Maintaining thread IDs
  • Streaming responses into the UI

A conceptual example:

import { useCloudChat } from "@assistantui/react-cloud";
import { Chat } from "@assistantui/react";
import { streamLLMResponse } from "./llm-backend"; // your implementation

function CloudBackedChat() {
  const chat = useCloudChat({
    onSendMessage: async (message, thread) => {
      // `thread` is persisted by Assistant Cloud
      return streamLLMResponse({ message, threadId: thread.id });
    },
  });

  return <Chat chat={chat} />;
}

Key points:

  • useCloudChat manages persistent threads
  • Your onSendMessage just focuses on AI logic
  • Assistant Cloud ensures that thread context is preserved across sessions
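To make the handler's shape concrete, here is a self-contained sketch of the contract a streamLLMResponse function might fulfill. All names are hypothetical; a real implementation would call your LLM backend (Vercel AI SDK, LangChain, etc.) instead of echoing the input:

```typescript
// Hypothetical contract for the send handler: given the user's message and a
// persistent thread ID, yield response chunks as they stream in.
async function* streamLLMResponse(opts: {
  message: string;
  threadId: string;
}): AsyncGenerator<string> {
  // Stand-in for a streaming LLM call; echoes the input token by token.
  const reply = `(${opts.threadId}) You said: ${opts.message}`;
  for (const token of reply.split(" ")) {
    yield token + " ";
  }
}

// Consuming the stream, as the chat UI does when rendering output.
async function collect(message: string, threadId: string): Promise<string> {
  let out = "";
  for await (const chunk of streamLLMResponse({ message, threadId })) {
    out += chunk;
  }
  return out.trim();
}
```

The important part is the shape: the handler receives a message plus a durable thread ID and produces a stream, while storage of both sides of the exchange is left to the cloud layer.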


Step 4: Persist conversations across refreshes

Once the cloud provider/hook is set up, conversations automatically persist:

  • When a user sends messages, they’re stored in Assistant Cloud
  • If the user refreshes, Assistant-UI re-fetches their active thread
  • Streaming output and retries are all handled via the same state layer

You don’t need to manually:

  • Save chat logs to local storage
  • Manage IDs or thread keys in URL params
  • Rebuild context on every mount

The Assistant UI Cloud integration is explicitly designed so “sessions persist across refreshes and context builds over time.”
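The write-through-and-rehydrate behavior described above can be modeled in a few lines. This is not Assistant Cloud's actual API, just a hypothetical sketch of the pattern the cloud state layer implements for you:

```typescript
// Minimal model of persist-and-rehydrate (all names hypothetical). Messages
// are written through to a durable store as they arrive; a fresh "mount"
// reloads them by thread ID instead of relying on local component state.
type Message = { role: "user" | "assistant"; content: string };

const cloudStore = new Map<string, Message[]>(); // stands in for Assistant Cloud

function appendMessage(threadId: string, msg: Message): void {
  const thread = cloudStore.get(threadId) ?? [];
  thread.push(msg);
  cloudStore.set(threadId, thread);
}

// What the cloud-aware hook does on mount: fetch the thread from the store.
function rehydrate(threadId: string): Message[] {
  return cloudStore.get(threadId) ?? [];
}

appendMessage("t1", { role: "user", content: "Hi" });
appendMessage("t1", { role: "assistant", content: "Hello!" });

// Simulate a page refresh: local component state is gone, the store is not.
const restored = rehydrate("t1");
```

Because every message is written through as it is sent, there is no separate "save" step for you to forget.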


Step 5: Let users resume threads

To let users resume existing threads (e.g., a “Recent chats” sidebar), you’ll typically:

  1. Query Assistant Cloud for the user’s threads
  2. Render them in a list
  3. Load the selected thread into the chat state

Conceptual example:

import { useCloudChat, useThreadList } from "@assistantui/react-cloud";
import { Chat } from "@assistantui/react";

function ChatWithThreadList() {
  const chat = useCloudChat();
  const { threads, isLoading } = useThreadList(); // fetched from Assistant Cloud

  if (isLoading) return <div>Loading conversations…</div>;

  return (
    <div className="chat-layout">
      <aside className="thread-list">
        {threads.map((thread) => (
          <button
            key={thread.id}
            onClick={() => chat.loadThread(thread.id)}
          >
            {thread.title ?? "Untitled conversation"}
          </button>
        ))}
      </aside>

      <main className="chat-panel">
        <Chat chat={chat} />
      </main>
    </div>
  );
}

With this pattern, users can:

  • Pick up an old conversation
  • Switch between different threads
  • Continue chatting with full historical context preserved by Assistant Cloud


Step 6: Handle authentication and user isolation

To make sure conversations persist per user (and not globally):

  • Use your existing auth system (e.g., NextAuth, JWT, custom auth)
  • Tie the Assistant Cloud session to the authenticated user
  • Ensure each user only sees their own threads

Implementation details vary, but best practices include:

  • Passing a user identifier (sub, userId, etc.) to your Assistant Cloud config
  • Ensuring your backend that proxies Assistant Cloud APIs enforces auth
  • Using secure server-side environment variables for any sensitive keys

This guarantees:

  • Conversations follow the logged-in user across devices
  • Threads remain isolated between users
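As a sketch of what user isolation means in a server-side proxy (all names hypothetical): every thread carries an owner ID, and every query is scoped to the authenticated user, so one user can never list or load another user's conversations:

```typescript
// Hypothetical server-side thread store with per-user scoping.
type Thread = { id: string; ownerId: string; title: string };

const threads: Thread[] = [
  { id: "t1", ownerId: "alice", title: "Trip planning" },
  { id: "t2", ownerId: "bob", title: "Refactoring help" },
  { id: "t3", ownerId: "alice", title: "Recipe ideas" },
];

// userId comes from your auth layer (e.g. a verified JWT `sub` claim),
// never from the client request body.
function listThreads(userId: string): Thread[] {
  return threads.filter((t) => t.ownerId === userId);
}

function loadThread(userId: string, threadId: string): Thread {
  const t = threads.find((t) => t.id === threadId && t.ownerId === userId);
  // Same error for "missing" and "forbidden" avoids leaking thread existence.
  if (!t) throw new Error("Not found");
  return t;
}
```

The key design choice is that the owner filter lives in the query itself, not in a UI-level check, so there is no code path that returns unscoped threads.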


Step 7: Customize the chat UX without breaking persistence

Assistant-UI is a React toolkit, so you can fully customize the UX while still leveraging Assistant Cloud for persistence:

  • Theming with Tailwind/Shadcn or your own design system
  • Custom message bubbles, avatars, and system messages
  • Tool invocation messages (“type @ to mention a tool”)
  • Attachments and file uploads

Because persistence is handled by the state layer + cloud, styling changes won’t affect:

  • How threads are stored
  • How messages are replayed
  • Whether sessions survive refreshes

You’re free to design a UI that matches your brand while keeping robust thread persistence.
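One way to see why styling cannot break persistence: presentation is a pure function of the stored message. In this hypothetical sketch, swapping the class names (or the entire design system) changes nothing about how the message is stored or replayed:

```typescript
// Hypothetical stored-message shape; the persistence layer owns this.
type StoredMessage = { role: "user" | "assistant"; content: string };

// The styling layer only reads stored data and returns presentation.
function bubbleClass(msg: StoredMessage): string {
  // Swap these for Tailwind/Shadcn classes or your own design system.
  return msg.role === "user" ? "bubble bubble--user" : "bubble bubble--assistant";
}
```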


Step 8: Production considerations

When you’re preparing to ship:

  • Performance: Assistant-UI is optimized for streaming and minimal bundle size, so it remains responsive even with long threads.
  • Error handling: Use built-in support for retries and interruptions so your UI degrades gracefully when the LLM or network fails.
  • Observability: If you use LangSmith/LangGraph, integrate your LLM calls there while Assistant Cloud handles chat persistence.

Combined, you get:

  • Production-ready UI
  • Persistent, reliable conversations
  • Strong observability for debugging and improving prompts
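For the error-handling point above, a common pattern is to wrap the LLM call in a small retry helper with exponential backoff so transient failures never surface as a broken thread. This is a generic sketch, not an Assistant-UI API:

```typescript
// Retry a flaky async call with exponential backoff before giving up.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Backoff: 250ms, 500ms, 1000ms, ... between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

In the chat context, the wrapped `fn` would be your LLM request; after the final failure you surface a retry affordance in the UI rather than discarding the thread.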


Summary: Persistent conversations with minimal setup

To set up Assistant Cloud with Assistant-UI so conversations persist and users can resume threads:

  1. Initialize Assistant-UI in your React app (npx assistant-ui init).
  2. Wrap your chat UI in the Assistant Cloud provider (CloudChatProvider or equivalent).
  3. Use a cloud-aware chat hook (like useCloudChat) to manage state.
  4. Connect your LLM backend via a simple send/stream handler.
  5. Fetch and render a thread list so users can resume previous conversations.
  6. Tie everything to your auth system so sessions follow users across devices.

Once this is in place, your app will behave like a modern AI product: conversations stored in the cloud, surviving refreshes, and ready to resume at any time.