
Assistant-UI React Native (Expo): how do I reuse the same tools/runtime from my web app?
Reusing the same tools and runtime between a web app and a React Native (Expo) app with Assistant‑UI is mostly about separating your “agent logic” from your UI layer and then wiring that shared logic into platform‑specific chat interfaces.
Below is a practical guide to structuring your code so you can develop and maintain one set of tools, models, and runtime configuration while shipping both a web and a React Native experience.
Core idea: separate “agent logic” from “chat UI”
Assistant‑UI is a React library that gives you:
- Production‑ready chat components
- State management for streaming, interruptions, retries, and multi‑turn conversations
- Integration with any LLM backend (Vercel AI SDK, LangChain, LangGraph, etc.)
To reuse your tools/runtime across platforms, you want:
- A shared agent layer that contains:
  - Tool definitions and implementations
  - Model/runtime configuration (e.g., Vercel AI SDK, LangChain, LangGraph)
  - Higher‑level “assistant” behavior (system prompts, routing, etc.)
- A platform‑specific UI layer that:
  - Uses Assistant‑UI components for web and for React Native
  - Connects to the shared agent layer through a consistent API

Think of it as: `agent-core/` (tools + runtime) → used by `web/` and `mobile/` apps that just render chat UI.
Step 1: Extract your tools and runtime into a shared module
Start by moving everything that defines what your assistant can do out of your web project and into a shared folder or package.
Example project structure:
```text
apps/
  web/        # your Next.js / React web app
  mobile/     # your Expo React Native app
packages/
  assistant-core/
    src/
      tools/
      runtime/
      index.ts
```
1. Define tools in a platform‑neutral way
Avoid browser‑ or Node‑only APIs inside tools. Prefer:
- `fetch` instead of `axios` (or use an isomorphic HTTP client)
- An abstracted data‑access layer for things like disk, databases, etc.
- Environment variables injected at build or runtime instead of `window`/`document`
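One way to keep platform specifics out of the shared package is a small interface that each app implements. Here is a minimal sketch; `KeyValueStore` and `MemoryStore` are hypothetical names, not part of Assistant‑UI:

```typescript
// packages/assistant-core/src/platform/KeyValueStore.ts
// Hypothetical abstraction: the core never touches localStorage or AsyncStorage.
export interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// An in-memory fallback that works on any platform (also useful in tests).
export class MemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  async get(key: string) {
    return this.data.get(key) ?? null;
  }
  async set(key: string, value: string) {
    this.data.set(key, value);
  }
}
```

The web app would provide a `localStorage`-backed implementation, the Expo app an AsyncStorage-backed one, and the shared tools only ever see the interface.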
Example tool definition (TypeScript):
```ts
// packages/assistant-core/src/tools/weatherTool.ts
export type WeatherToolInput = { city: string };

export async function getWeather({ city }: WeatherToolInput) {
  // Call your backend / external API
  const res = await fetch(
    `${process.env.WEATHER_API_URL}?city=${encodeURIComponent(city)}`
  );
  if (!res.ok) throw new Error("Failed to fetch weather");
  return res.json();
}
```
2. Define a shared runtime configuration
This is where you wire in your LLM provider (Vercel AI SDK, LangChain, LangGraph, etc.) and connect tools.
Example with a generic “runAssistant” interface:
```ts
// packages/assistant-core/src/runtime/runAssistant.ts
import { getWeather } from "../tools/weatherTool";
// import your LLM client here (e.g., Vercel AI SDK, LangChain, LangGraph)

export type AssistantMessage = {
  id: string;
  role: "user" | "assistant" | "system" | "tool";
  content: string;
};

export type RunAssistantOptions = {
  messages: AssistantMessage[];
  onToken?: (token: string) => void; // streaming callback
  onToolCall?: (toolName: string, args: unknown) => void;
};

export type RunAssistantResult = {
  type: "assistant";
  content: string;
};

export async function runAssistant(
  opts: RunAssistantOptions
): Promise<RunAssistantResult> {
  const { messages, onToken, onToolCall } = opts;

  // Pseudo-code for an LLM call that can call tools:
  // - Plug in LangChain, LangGraph, or your own orchestration here.
  // - This function is platform-agnostic.

  // Example: intercept a "weather" tool call
  if (messages[messages.length - 1]?.content.includes("weather")) {
    onToolCall?.("getWeather", { city: "San Francisco" });
    const weather = await getWeather({ city: "San Francisco" });

    // Emit a streamed answer (mock)
    const answer = `The weather in San Francisco is ${weather.summary}`;
    if (onToken) {
      for (const token of answer.split(" ")) {
        await new Promise((r) => setTimeout(r, 20));
        onToken(token + " ");
      }
    }
    return { type: "assistant", content: answer };
  }

  // Otherwise, call your main LLM without tools
  // ...and always return a result so callers never receive `undefined`.
  return { type: "assistant", content: "" };
}
```
Expose this from your package:
```ts
// packages/assistant-core/src/index.ts
export * from "./runtime/runAssistant";
export * from "./tools/weatherTool";
```
This module is now shared by both web and mobile.
Step 2: Wire the shared runtime into your web app
In your web app, you already have Assistant‑UI integrated. The key is to adapt Assistant‑UI’s state management / streaming hooks to call `runAssistant`.
The exact code depends on the version of Assistant‑UI and your LLM stack, but the pattern typically looks like this:
```tsx
// apps/web/src/app/chat/page.tsx
"use client";
import { Chat } from "assistant-ui"; // adjust to the actual export in your version
import { runAssistant } from "@your-org/assistant-core";

export default function ChatPage() {
  // Use Assistant-UI's hooks or props-based configuration to forward
  // messages to runAssistant. Pseudo-example:
  return (
    <Chat
      onSendMessage={async (conversationState) => {
        const { messages } = conversationState;
        return runAssistant({
          messages,
          onToken: conversationState.streamToken, // or similar callback
        });
      }}
      // other props: theme, initial messages, etc.
    />
  );
}
```
The important part for maintainability: all the AI logic lives in `assistant-core`, and the web app is just a thin chat shell.
Step 3: Set up your Expo (React Native) app with Assistant‑UI
For React Native (Expo), you have two choices:
- Use Assistant‑UI’s React Native‑compatible components directly (if your version supports them), or
- Build your own simple chat UI and reuse Assistant‑UI only as state management / logic (or just the pattern from your web app).
1. Install dependencies in the Expo app
From your `apps/mobile` directory:

```sh
npm install assistant-ui
# or
yarn add assistant-ui
```
Ensure your monorepo can import the shared package:

```sh
yarn add @your-org/assistant-core
```

And configure the Expo / Metro bundler to resolve `@your-org/assistant-core` (e.g., via `metro.config.js` / `babel.config.js`) if you’re using a monorepo.
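In an Expo monorepo, the typical `metro.config.js` adjustment looks roughly like this (the workspace paths are assumptions about your layout; check Expo's monorepo docs for your SDK version):

```javascript
// apps/mobile/metro.config.js
const { getDefaultConfig } = require("expo/metro-config");
const path = require("path");

const projectRoot = __dirname;
const workspaceRoot = path.resolve(projectRoot, "../..");

const config = getDefaultConfig(projectRoot);

// Watch the whole monorepo so edits in packages/ trigger rebuilds.
config.watchFolders = [workspaceRoot];

// Resolve modules from both the app's and the workspace's node_modules.
config.resolver.nodeModulesPaths = [
  path.resolve(projectRoot, "node_modules"),
  path.resolve(workspaceRoot, "node_modules"),
];

module.exports = config;
```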
2. Create a mobile chat screen that calls the same runtime
On React Native, you’ll implement a chat UI that hooks into `runAssistant` in almost the same way as the web app.
Simple example using your own chat UI:
```tsx
// apps/mobile/src/screens/ChatScreen.tsx
import React, { useState } from "react";
import { View, TextInput, Button, FlatList, Text } from "react-native";
import { runAssistant, AssistantMessage } from "@your-org/assistant-core";

export function ChatScreen() {
  const [messages, setMessages] = useState<AssistantMessage[]>([]);
  const [input, setInput] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);
  const [streamedText, setStreamedText] = useState("");

  async function handleSend() {
    if (!input.trim()) return;

    const userMessage: AssistantMessage = {
      id: String(Date.now()),
      role: "user",
      content: input,
    };
    setMessages((prev) => [...prev, userMessage]);
    setInput("");
    setIsStreaming(true);
    setStreamedText("");

    const finalResponse = await runAssistant({
      messages: [...messages, userMessage],
      onToken: (token) => {
        setStreamedText((prev) => prev + token);
      },
    });
    setIsStreaming(false);

    const assistantMessage: AssistantMessage = {
      id: String(Date.now() + 1),
      role: "assistant",
      content: finalResponse.content,
    };
    setMessages((prev) => [...prev, assistantMessage]);
  }

  return (
    <View style={{ flex: 1, padding: 16 }}>
      <FlatList
        data={messages}
        keyExtractor={(m) => m.id}
        renderItem={({ item }) => (
          <Text style={{ marginVertical: 4 }}>
            {item.role === "user" ? "You: " : "AI: "}
            {item.content}
          </Text>
        )}
      />
      {isStreaming && (
        <Text style={{ marginVertical: 4 }}>AI (streaming): {streamedText}</Text>
      )}
      <TextInput
        value={input}
        onChangeText={setInput}
        placeholder="Send a message…"
        style={{
          borderWidth: 1,
          borderColor: "#ccc",
          padding: 8,
          borderRadius: 4,
          marginBottom: 8,
        }}
      />
      <Button title="Send" onPress={handleSend} />
    </View>
  );
}
```
If Assistant‑UI ships React Native‑ready components in your version, you can replace the custom UI with their `<Chat />` (or similar) in exactly the same pattern as your web app: the only difference is the layout/style, not the runtime.
Step 4: Share state management patterns between web and mobile
Because Assistant‑UI already handles state management for:
- Streaming responses
- Interruptions / aborts
- Retries
- Multi‑turn history
the cleanest way to reuse behavior is to:
- Use the same adapter layer between Assistant‑UI and `runAssistant` in both web and mobile.
- Factor that adapter into the shared `assistant-core` package if it’s UI‑agnostic.
For example:
```ts
// packages/assistant-core/src/adapters/assistantUiAdapter.ts
import { runAssistant } from "../runtime/runAssistant";
import type { AssistantMessage } from "../runtime/runAssistant";

export async function handleChatRequest(
  messages: AssistantMessage[],
  callbacks: { onToken?: (token: string) => void }
) {
  return runAssistant({
    messages,
    onToken: callbacks.onToken,
  });
}
```
Then in both web and mobile, you call `handleChatRequest` from your Assistant‑UI integration or custom chat component. This ensures consistent behavior for streaming, retries, and tool usage.
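If you want retries in particular to behave identically on both platforms, a small UI-agnostic helper in the shared package can wrap the runtime call. A sketch; `withRetry` is a hypothetical helper, not an Assistant‑UI export:

```typescript
// packages/assistant-core/src/adapters/withRetry.ts
// Hypothetical helper: retry a runtime call with simple linear backoff so
// web and mobile share the exact same retry behavior.
export async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 2,
  delayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Wait a little longer after each failed attempt.
        await new Promise((r) => setTimeout(r, delayMs * (attempt + 1)));
      }
    }
  }
  throw lastError;
}
```

Both apps would then call something like `withRetry(() => handleChatRequest(messages, callbacks))` instead of invoking the runtime directly.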
Step 5: Handle environment variables and platform differences
To truly reuse the same tools/runtime from your web app in an Expo app, you need to make sure environment and platform‑specific concerns are handled safely:
- Do not expose secrets in a pure client‑side mobile app.
  - If your tools call LLM APIs directly with secrets (e.g., `OPENAI_API_KEY`), move those calls to a backend (Next.js API route, serverless function, or dedicated server).
  - The mobile app (and web) should call your backend, which then uses `assistant-core` on the server.
- Use environment‑appropriate base URLs.
  - For web: `/api/assistant`
  - For mobile: `https://your-backend.com/api/assistant`
  - Keep the runtime logic shared, but configure endpoints via env or config files.
- Avoid Node‑only or browser‑only APIs inside `assistant-core`.
  - No direct `localStorage`, `window`, `document`, `fs`, `process.cwd()`, etc.
  - If needed, wrap such calls behind interfaces and provide platform‑specific implementations in `web` and `mobile`.
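One lightweight way to keep endpoints configurable per platform is a tiny resolver in the shared package. A sketch; the function name and the rule that callers pass in their platform are assumptions, not an Assistant‑UI convention:

```typescript
// packages/assistant-core/src/config.ts
// Hypothetical endpoint resolver: the caller declares its platform, so the
// shared package never sniffs window or process directly.
export type Platform = "web" | "mobile";

export function getAssistantEndpoint(
  platform: Platform,
  backendBaseUrl?: string
): string {
  if (platform === "web") {
    // Same-origin API route on the web.
    return "/api/assistant";
  }
  // Mobile has no same-origin server, so it needs an absolute URL.
  if (!backendBaseUrl) {
    throw new Error("backendBaseUrl is required on mobile");
  }
  return `${backendBaseUrl.replace(/\/$/, "")}/api/assistant`;
}
```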
Example: shared backend, shared runtime, two UIs
A robust production pattern looks like this:
- `packages/assistant-core/` — tools and agent runtime (no UI, no secrets exposed directly to mobile)
- `apps/api/` or `apps/web/` (Next.js) — an API route `/api/assistant` that:
  - Receives messages
  - Calls `runAssistant` from `assistant-core`
  - Streams tokens back to clients
- `apps/web/` (Next.js)
  - Uses Assistant‑UI components
  - Talks to `/api/assistant`
- `apps/mobile/` (Expo)
  - Uses either Assistant‑UI RN components or a custom chat UI
  - Talks to the same `/api/assistant`
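On the backend side of this pattern, streamed tokens are often framed as server-sent events. The framing itself is just a string format, sketched below; the route wiring around it depends on your server framework, and the `[DONE]` marker is a common convention rather than a standard:

```typescript
// apps/api/src/sse.ts (illustrative)
// Server-sent events frame each message as one or more "data:" lines,
// terminated by a blank line. Web and mobile clients parse this identically.
export function toSseChunk(token: string): string {
  // Split on newlines so multi-line tokens remain valid SSE.
  return (
    token
      .split("\n")
      .map((line) => `data: ${line}`)
      .join("\n") + "\n\n"
  );
}

// Conventional end-of-stream marker (popularized by the OpenAI API).
export const SSE_DONE = "data: [DONE]\n\n";
```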
This gives you:
- One set of tools and runtime logic (`assistant-core`)
- One backend runtime behavior for both platforms
- Platform‑specific chat experiences optimized for web and mobile
Troubleshooting common issues
1. Tool works on web but not on Expo
Likely cause: Node‑ or browser‑specific APIs in tools or runtime. Confirm that everything under `assistant-core` is platform‑agnostic, or use a backend‑only execution model.
2. Streaming doesn’t work in mobile
Check that your mobile app supports the streaming mechanism used by your backend (SSE, chunked fetch, WebSockets) and that you hook those events into `onToken` to update the UI.
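Because network chunks can split an event in the middle, the client needs to buffer partial data. A minimal, platform-agnostic parser for standard `data:`-framed SSE text might look like this (a sketch; `parseSseBuffer` is a hypothetical helper you would feed raw chunks as they arrive):

```typescript
// Hypothetical shared SSE buffer parser: returns completed tokens plus any
// leftover partial event to prepend to the next chunk.
export function parseSseBuffer(buffer: string): {
  tokens: string[];
  rest: string;
} {
  const tokens: string[] = [];
  // Complete SSE events are terminated by a blank line.
  const events = buffer.split("\n\n");
  const rest = events.pop() ?? ""; // last piece may be an incomplete event
  for (const event of events) {
    const data = event
      .split("\n")
      .filter((line) => line.startsWith("data: "))
      .map((line) => line.slice("data: ".length))
      .join("\n");
    if (data && data !== "[DONE]") tokens.push(data);
  }
  return { tokens, rest };
}
```

Each completed token can then be forwarded to `onToken`, keeping the streaming behavior identical across web and mobile.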
3. Type mismatches between projects
Centralize types (messages, tool inputs/outputs, runtime options) in `assistant-core` and import them everywhere. This avoids drift and keeps web/mobile in sync.
Summary
To reuse the same tools/runtime from your web app in an Assistant‑UI React Native (Expo) app:
- Extract tools and runtime into a shared module (`assistant-core`) that’s platform‑agnostic.
- Expose a single `runAssistant` API (or similar) that web and mobile can both call.
- Wire Assistant‑UI (or your RN chat UI) to that shared runtime using consistent adapters.
- Handle secrets and platform differences via a backend or careful abstraction.
- Reuse state management patterns for streaming, interruptions, and retries so behavior matches across platforms.
Once this structure is in place, you can iterate on tools, prompts, and agent logic in one place and instantly benefit both your web and Expo apps—exactly the kind of “build once, ship everywhere” workflow Assistant‑UI is designed to support.