
How do I point my existing OpenAI SDK to Oxen.ai’s OpenAI-compatible API (https://hub.oxen.ai/api) and choose a model?
Quick Answer: Point your existing OpenAI SDK at `https://hub.oxen.ai/api` instead of the default OpenAI base URL, set your `apiKey` to your Oxen API key, and then select one of Oxen’s supported models by its model name. Most OpenAI-compatible clients only need those two changes: `baseURL` and `model`.
Why This Matters
If you already have code running against the OpenAI API, you shouldn’t have to rewrite your whole stack just to try new models or control costs. Oxen.ai exposes an OpenAI-compatible API surface at https://hub.oxen.ai/api, so you can keep your existing SDKs, keep your existing prompts, and just swap out the base URL and model name. That lets you experiment with Oxen’s catalog of models, fine-tuned endpoints, and pay-as-you-go pricing without rebuilding your entire integration.
Key Benefits:
- Reuse your existing OpenAI SDKs: Keep the same client libraries and request patterns; just point them at `https://hub.oxen.ai/api`.
- Swap models without changing your app: Move between base models and your own fine-tuned models by changing the `model` field.
- Own your AI stack: Version datasets, fine-tune models, and deploy serverless endpoints in Oxen while your app keeps calling a familiar OpenAI-style API.
Core Concepts & Key Points
| Concept | Definition | Why it's important |
|---|---|---|
| OpenAI-compatible API | An HTTP API that follows OpenAI’s schema (endpoints, payload shapes, auth headers) so existing OpenAI SDKs work with minimal changes. | Lets you move your app to Oxen.ai by only changing baseURL and apiKey, not your entire codebase. |
| Base URL (`baseURL`) | The root URL your SDK uses to send requests (e.g., https://hub.oxen.ai/api). | This is the primary switch that tells your existing OpenAI client to talk to Oxen instead of api.openai.com. |
| Model name (`model`) | The string ID of the model or endpoint you want to call (e.g., gpt-4.1-mini or a custom fine-tuned model ID). | Choosing the right model controls cost, latency, and output quality; in Oxen this can include both catalog models and your own fine-tunes. |
How It Works (Step-by-Step)
At a high level, you: create an Oxen account, generate an API key, point your existing OpenAI SDK to https://hub.oxen.ai/api, and then pick a model from Oxen’s catalog or one you’ve fine-tuned.
1. Create your Oxen account and API key
- Go to https://www.oxen.ai/register and create a free account.
- Once logged in, navigate to your API or account settings.
- Generate an API key and copy it somewhere secure (treat it like a password).
You’ll pass this key as the apiKey in your OpenAI SDK configuration or via the Authorization: Bearer <OXEN_API_KEY> header if you’re calling the API directly.
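If you’re calling the API directly rather than through an SDK, the headers are the standard OpenAI-style pair. Here’s a minimal sketch; the helper name `oxen_headers` and the env-var fallback are my own convention, not part of Oxen’s docs:

```python
import os

def oxen_headers(api_key=None):
    """Build auth headers for direct HTTP calls to Oxen's OpenAI-compatible API.

    Falls back to the OXEN_API_KEY environment variable when no key is passed.
    """
    key = api_key or os.environ.get("OXEN_API_KEY")
    if not key:
        raise RuntimeError("Set OXEN_API_KEY or pass an API key explicitly")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```

Pass these headers to whatever HTTP client you use; the SDKs below build the same headers for you.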
2. Update your OpenAI client to use Oxen’s base URL
Almost all OpenAI SDKs let you override the base URL / host.
Node/TypeScript (Official OpenAI SDK)
import OpenAI from "openai";
const client = new OpenAI({
apiKey: process.env.OXEN_API_KEY, // your Oxen key
baseURL: "https://hub.oxen.ai/api", // Oxen’s OpenAI-compatible API
});
Python (Official OpenAI SDK)
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OXEN_API_KEY"],
    base_url="https://hub.oxen.ai/api",
)
cURL
curl https://hub.oxen.ai/api/chat/completions \
-H "Authorization: Bearer $OXEN_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1-mini",
"messages": [{"role": "user", "content": "Hello from Oxen!"}]
}'
If your language client doesn’t expose baseURL directly, look for base_url, api_base, or similar; the value should always be https://hub.oxen.ai/api.
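As a last resort, when a client exposes no base-URL override at all, you can construct the request yourself. This sketch builds (but does not send) a raw request to the `/chat/completions` path shown in the cURL example above; the helper name is hypothetical:

```python
import json
import urllib.request

def build_chat_request(model, messages, api_key,
                       base_url="https://hub.oxen.ai/api"):
    """Construct (but don't send) a raw OpenAI-style chat-completions request."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending it is then a single `urllib.request.urlopen(req)` call, or hand the same URL, headers, and body to any HTTP library.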
3. Choose and set the model
Oxen’s API will accept the model field in your usual OpenAI payloads (/chat/completions, /completions, etc.). You have two main options:
- Use a catalog model – standard models exposed by Oxen (e.g., GPT-4–class text models, smaller “mini” models, vision models).
- Use your own fine-tuned model – custom endpoints you create in Oxen from versioned datasets.
A. Use a standard catalog model
Check the Oxen UI or docs for the current list of model IDs and pricing (e.g., “120 models. New models added every week.”). Then plug the ID into your usual calls:
Node:
const response = await client.chat.completions.create({
model: "gpt-4.1-mini", // example Oxen-supported model
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Summarize Oxen.ai in 3 bullet points." },
],
});
console.log(response.choices[0].message.content);
Python:
resp = client.chat.completions.create(
model="gpt-4.1-mini",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain Oxen.ai in two sentences."},
],
)
print(resp.choices[0].message.content)
The payload shape stays the same; you just swap in a model string that Oxen supports.
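One way to make that swap explicit in your code is to isolate the payload construction so only the model string varies. A minimal sketch (the helper and default system prompt are my own convention):

```python
def chat_payload(model, user_prompt, system_prompt="You are a helpful assistant."):
    """OpenAI-style chat payload; only the model string changes per provider."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
```

Switching providers or models then means changing one argument, not touching the message structure.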
B. Call your own fine-tuned model on Oxen
Oxen’s flow is:
- Upload and version your dataset in an Oxen repository.
- Fine-tune a model via the UI (zero-code) using that dataset.
- Deploy a serverless endpoint for the fine-tuned model in one click.
After deployment, you’ll get a model/endpoint identifier (e.g., ft:my-product-assistant-2026-03-01). Use that as the model in your existing OpenAI code:
const response = await client.chat.completions.create({
model: "ft:my-product-assistant-2026-03-01",
messages: [
{ role: "system", content: "You answer only with our product’s support policies." },
{ role: "user", content: "Can I use this feature offline?" },
],
});
Now you’re calling your own Oxen-hosted model through the same OpenAI SDK your app already uses.
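To make rollouts safer, you can gate the catalog-vs-fine-tune choice behind configuration rather than hardcoding the ID. A sketch, assuming a hypothetical `USE_FINETUNE` env flag and the example IDs used above:

```python
import os

# Illustrative IDs only; substitute your actual Oxen catalog / fine-tune IDs.
CATALOG_MODEL = "gpt-4.1-mini"
FINETUNED_MODEL = "ft:my-product-assistant-2026-03-01"

def pick_model(use_finetune=None):
    """Choose between the catalog model and the deployed fine-tune.

    Defaults to the USE_FINETUNE env var so you can flip models without a redeploy.
    """
    if use_finetune is None:
        use_finetune = os.environ.get("USE_FINETUNE", "0") == "1"
    return FINETUNED_MODEL if use_finetune else CATALOG_MODEL
```

Then pass `pick_model()` as the `model` argument in your existing completion calls.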
4. Keep prompts and tools mostly unchanged
Because Oxen’s API is OpenAI-compatible:
- Chat messages (`role`, `content`) work as-is.
- `temperature`, `max_tokens`, `top_p`, and similar parameters behave like you’d expect.
- Tools / function calling can usually be sent as you do today, as long as the underlying model supports it.
You get to keep your prompt engineering work intact while swapping the backend.
Common Mistakes to Avoid
- Forgetting to change the base URL: Make sure your SDK config or environment points at `https://hub.oxen.ai/api`. If you leave it as `https://api.openai.com/v1`, you’re still hitting OpenAI, not Oxen.
- Using the wrong model ID: Model strings are provider-specific. Don’t assume `gpt-4` or `gpt-3.5-turbo` will always exist with the same name. Check Oxen’s model catalog, then copy the exact model or fine-tune ID into your `model` field.
- Mixing API keys and hosts: An Oxen API key won’t work against `api.openai.com`, and an OpenAI key won’t work against `hub.oxen.ai`. Make sure your deployment environment sets `OXEN_API_KEY` only for calls to Oxen.
- Ignoring dataset + model versioning: When you fine-tune on Oxen, version the dataset you train on. That way you can always answer “which data trained which model?” and avoid silently regressing your model with unreviewed data.
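The key/host mix-up in particular is easy to catch at startup with a cheap sanity check. This is a heuristic sketch of my own, keyed off which env var you read the key from, not any documented key format:

```python
def check_config(base_url, api_key_env_name):
    """Heuristic check for the 'mixed keys and hosts' mistake.

    Returns a list of warnings; an empty list means the pairing looks consistent.
    """
    warnings = []
    if "api.openai.com" in base_url and api_key_env_name == "OXEN_API_KEY":
        warnings.append("Oxen key pointed at api.openai.com")
    if "hub.oxen.ai" in base_url and api_key_env_name == "OPENAI_API_KEY":
        warnings.append("OpenAI key pointed at hub.oxen.ai")
    return warnings
```

Run it once when your service boots and log (or fail fast on) any warnings.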
Real-World Example
Imagine you already have a production Node service that:
- Uses the official OpenAI SDK.
- Calls `gpt-4.1-mini` for product copy suggestions.
- Is starting to hit both cost and latency limits.
You decide to try Oxen to:
- test alternative small models, and
- fine-tune one on your own marketing dataset.
You:
1. Create an Oxen account and generate an API key.
2. Add two environment variables to your service: `OXEN_API_KEY` and `OXEN_BASE_URL=https://hub.oxen.ai/api`.
3. Update your OpenAI client initialization to `const client = new OpenAI({ apiKey: process.env.OXEN_API_KEY, baseURL: process.env.OXEN_BASE_URL })`.
4. Swap your `model` from `gpt-4.1-mini` to an Oxen catalog model with a better price/latency profile.
5. In parallel, upload your labeled copy dataset into an Oxen repository, fine-tune a model in a few clicks, and deploy it to a serverless endpoint.
6. Once you’re happy with the fine-tune, replace the catalog model ID with your deployed fine-tuned model ID. No other changes to your application code.
You’ve just moved from a generic OpenAI model to a custom, dataset-backed Oxen model while keeping the same SDK, the same request schema, and essentially the same code path.
Pro Tip: Store both the Oxen `model` ID and the dataset commit/hash that trained it in your app’s config. That gives you an audit trail from request → model version → dataset version, which is critical when you’re debugging behavior or explaining a model decision to stakeholders.
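A lightweight way to keep that pairing is a single immutable record in your app config. The IDs and commit hash here are placeholders, not real values:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ModelRecord:
    """Audit-trail entry tying requests to model and dataset versions."""
    model_id: str        # Oxen catalog or fine-tune ID
    dataset_commit: str  # Oxen repo commit that trained it

# Placeholder values for illustration.
RECORD = ModelRecord(
    model_id="ft:my-product-assistant-2026-03-01",
    dataset_commit="abc123",
)
```

Log `asdict(RECORD)` alongside each request (or a request-batch ID) and you can trace any output back to its training data.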
Summary
To point your existing OpenAI SDK to Oxen.ai’s OpenAI-compatible API at https://hub.oxen.ai/api, you only need to: generate an Oxen API key, set baseURL to https://hub.oxen.ai/api, and choose a model ID from Oxen’s catalog or from your own fine-tuned deployments. Everything else—SDK usage, payload shapes, and prompt structure—can stay the same. That makes it easy to “Own Your AI”: version datasets, fine-tune models, and deploy serverless endpoints on Oxen without rebuilding your application stack.