How do I install the Oxen.ai CLI (Homebrew/Linux/Windows) and log in with an API key?

Quick Answer: To install the Oxen.ai CLI, use Homebrew on macOS, a package or binary install on Linux, or a standalone executable on Windows—then log in once with an API key to connect your local machine to your Oxen account. After that, you can version datasets, push model weights, and trigger fine-tunes directly from your terminal.

Most of the real work in AI happens on your machine or in your CI—not in a browser. The Oxen.ai CLI is what links your local data, models, and workflows to your Oxen repositories, fine-tuning jobs, and inference endpoints. If you want reproducible datasets, traceable model weights, and a clean path from “folder of files” to “production endpoint,” you need the CLI set up and authenticated with an API key.

Key Benefits:

  • Version every asset from your terminal: Track datasets, model weights, and other large artifacts using Git-like commands without fighting S3 sync scripts.
  • Trigger fine-tunes and deployments faster: Move from local dataset → Oxen repo → fine-tuned model → serverless endpoint without wiring up custom infra.
  • Standardize workflows across environments: Use the same Oxen.ai CLI commands on macOS, Linux, and Windows, so your laptop, workstation, and CI all behave the same way.

Core Concepts & Key Points

Concept | Definition | Why it's important
--- | --- | ---
Oxen.ai CLI | A command-line tool to interact with Oxen repositories, datasets, models, and endpoints. | Lets you version, push, pull, and manage assets programmatically, not just through the UI.
API Key Login | Authentication method where you log in once with a personal API token instead of browser-based auth. | Ideal for headless environments (servers, CI) and repeatable scripts—no “click to log in” step required.
Cross-OS Install (Homebrew/Linux/Windows) | Different installation flows tailored to each OS: Homebrew for macOS, package/binary for Linux, and standalone binary for Windows. | Ensures your whole team can use the same Oxen workflow regardless of their local OS.

How It Works (Step-by-Step)

At a high level, you:

  1. Install the CLI for your OS (Homebrew on macOS, package/binary on Linux, executable on Windows).
  2. Configure your environment (ensure oxen is on your PATH and verify the version).
  3. Log in with an API key once so your CLI has permission to access your Oxen.ai account.
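The three steps above can be condensed into a shell sketch. The oxen commands are commented out so the script is safe to dry-run; the brew formula name and the login flag follow this guide and may differ on your CLI version.

```shell
# Hedged sketch of the three setup steps from this guide.

# Step 1: install (macOS shown; see the Linux and Windows sections below)
# brew install oxen

# Step 2: confirm the binary resolves on your PATH
if command -v oxen >/dev/null 2>&1; then
  echo "oxen found at $(command -v oxen)"
else
  echo "oxen not on PATH yet - finish the install step first"
fi

# Step 3: authenticate once with your API key
# oxen login --api-key "$OXEN_API_KEY"
```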

Below, I’ll walk through concrete flows for each platform and then show how to authenticate with an API key.


Install the Oxen.ai CLI on macOS with Homebrew

On macOS, Homebrew is the cleanest way to install and update the Oxen.ai CLI.

1. Verify Homebrew is installed

In your terminal:

brew --version

If this errors, install Homebrew first from https://brew.sh.

2. Add the Oxen.ai tap (if required)

If Oxen maintains a dedicated tap, you’ll add it once:

brew tap oxen-ai/oxen

If the CLI is published directly to Homebrew core, you can skip this step; a quick search will confirm which formula name is available:

brew search oxen

3. Install the CLI

brew install oxen

If your environment uses the tap:

brew install oxen-ai/oxen/oxen

4. Confirm the install

oxen --version
oxen --help

If oxen isn’t found, ensure Homebrew’s bin directory is on your PATH, e.g.:

echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

(For Intel Macs this may be /usr/local/bin.)


Install the Oxen.ai CLI on Linux

On Linux, you typically either use a package manager or install from a binary release.

1. Install via package manager (Debian/Ubuntu-style)

If Oxen publishes a .deb:

# Download the latest .deb
wget https://download.oxen.ai/cli/oxen_latest_amd64.deb

# Install the package
sudo dpkg -i oxen_latest_amd64.deb

# Fix any missing dependencies
sudo apt-get -f install

Check:

oxen --version

2. Install via binary (any distro)

If you’re on a different distro (e.g., Fedora, Arch) or prefer a bare binary:

# Replace with the actual latest URL for your architecture
wget https://download.oxen.ai/cli/oxen-linux-amd64 -O oxen
chmod +x oxen
sudo mv oxen /usr/local/bin/

Now verify:

which oxen
oxen --version

If which oxen returns nothing, ensure /usr/local/bin is on your PATH (e.g., add to ~/.bashrc or ~/.zshrc).
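If you need to add it, the snippet below appends /usr/local/bin only when it is missing and then prints the resulting entry; persist the PATH line in ~/.bashrc or ~/.zshrc to make it permanent.

```shell
# Append /usr/local/bin to PATH for the current shell only if absent,
# then confirm the entry is present.
case ":$PATH:" in
  *":/usr/local/bin:"*) ;;                 # already on PATH, nothing to do
  *) PATH="$PATH:/usr/local/bin" ;;
esac
export PATH
echo "$PATH" | tr ':' '\n' | grep -x '/usr/local/bin'
```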


Install the Oxen.ai CLI on Windows

On Windows, you’ll typically download an .exe, or use a package manager like winget or Chocolatey if Oxen publishes packages there.

1. Install via installer or standalone binary

From a PowerShell prompt:

  1. Go to the Oxen.ai downloads page.
  2. Download oxen-windows-amd64.exe (or similar).
  3. Either:
    • Place it in a folder already on your PATH (e.g., C:\Users\<you>\AppData\Local\Microsoft\WindowsApps), and rename it to oxen.exe, or
    • Create a folder like C:\Tools\Oxen\, move oxen.exe there, and add that folder to your system PATH:

In PowerShell (run as Administrator):

# Read the machine-scope PATH first, so user and session entries
# aren't duplicated into the machine PATH.
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable(
  "Path",
  $machinePath + ";C:\Tools\Oxen",
  [System.EnvironmentVariableTarget]::Machine
)

Close and reopen your terminal, then run:

oxen --version

2. Install via winget or Chocolatey (if available)

If Oxen is published to winget:

winget search oxen
winget install OxenLabs.OxenCLI

Or via Chocolatey:

choco install oxen-cli

Again, verify:

oxen --help

Log In to the Oxen.ai CLI with an API Key

Installation is only half the story: your CLI needs to authenticate to Oxen before it can push datasets and model weights or interact with your repos.

1. Generate an API key in Oxen.ai

  1. Log in to https://www.oxen.ai using your email/password, magic link, or GitHub.
  2. Open your account or profile settings.
  3. Find the API Keys or Developer section.
  4. Click Create New API Key.
  5. Give it a descriptive name like laptop-maya or ci-github-actions.
  6. Copy the generated key immediately and store it in a secure place (password manager, secrets manager).

Treat your API key like a password—anyone who has it can act as you.
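One way to keep the key out of shell history and world-readable files is to store it in a mode-600 file and read it into the environment when needed. The paths below are illustrative, not an Oxen convention.

```shell
# Store the key in a file only your user can read. Replace
# YOUR_API_KEY_HERE with the key copied from the Oxen.ai UI.
mkdir -p "$HOME/.config/oxen-secrets"
printf '%s\n' "YOUR_API_KEY_HERE" > "$HOME/.config/oxen-secrets/api_key"
chmod 600 "$HOME/.config/oxen-secrets/api_key"

# Confirm the permissions are owner-only (-rw-------)
ls -l "$HOME/.config/oxen-secrets/api_key" | cut -c1-10
```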

2. Log in via the CLI using the API key

From any terminal where oxen is installed:

oxen login --api-key YOUR_API_KEY_HERE

Some CLI builds configure authentication with oxen config --auth hub.oxen.ai YOUR_API_KEY instead; run oxen --help to see which form yours supports. If the CLI supports an interactive prompt, you can also run:

oxen login

Then paste the API key when prompted.

The CLI will typically write a config file to your home directory (e.g., ~/.oxen/config on Unix-like systems or %USERPROFILE%\.oxen\config on Windows) so you don’t have to re-enter the key every time.
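You can sanity-check that credentials were saved. In this sketch the ~/.oxen/config path is the one mentioned above; it may differ by CLI version and OS.

```shell
# Check whether the CLI has written a saved-credentials file.
CFG="$HOME/.oxen/config"
if [ -f "$CFG" ]; then
  echo "found Oxen config at $CFG"
else
  echo "no Oxen config at $CFG yet - run the login step first"
fi
```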

3. Verify authentication

Run a command that hits your account (exact command names can vary by CLI version), for example:

oxen whoami

or list your repositories:

oxen repo list

If you see your username and/or your repo list, your API key login is working.


Configure Environment Variables for CI and Servers

In headless environments (CI pipelines, remote servers), you usually don’t want to bake API keys into images or scripts. Instead, inject them via environment variables and log in non-interactively.

A common pattern:

export OXEN_API_KEY=YOUR_API_KEY_HERE
oxen login --api-key "$OXEN_API_KEY"

In GitHub Actions:

- name: Log in to Oxen
  run: oxen login --api-key "${{ secrets.OXEN_API_KEY }}"

This way you can:

  • Version datasets from CI after preprocessing.
  • Push model weights after training jobs complete.
  • Trigger fine-tunes and keep everything traceable back to specific commits.
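To make that login step fail fast when the secret was not injected, here is a minimal guard; the oxen login flag follows this guide's usage, and a dummy key is substituted so the snippet also runs outside CI.

```shell
# In CI, OXEN_API_KEY is injected by the secret store; the fallback
# value below only exists so this sketch is runnable locally.
OXEN_API_KEY="${OXEN_API_KEY:-dummy-local-value}"

# Abort the job with a clear message if the variable is empty.
: "${OXEN_API_KEY:?OXEN_API_KEY is not set - configure it as a CI secret}"

echo "API key present (length ${#OXEN_API_KEY})"
# oxen login --api-key "$OXEN_API_KEY"
```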

Common Mistakes to Avoid

  • Storing the API key in source control:
    Never commit your API key to Git. Use environment secrets in CI systems and a password manager locally.

  • Forgetting to put the CLI on your PATH:
    If oxen isn’t found, you didn’t add the install directory to your PATH. Fix that before debugging anything else.

  • Mixing multiple accounts without checking whoami:
    On shared machines or remote servers, always run oxen whoami before pushing datasets or models so you don’t write to the wrong account or org.

  • Using the same API key everywhere:
    Generate separate keys per device or environment. That way if a key leaks from CI, you can revoke just that one.
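The whoami check can be made mechanical. Below is a sketch of a pre-push identity guard, with the comparison factored into a function so it can be exercised without the CLI installed; the oxen whoami and oxen push usage follows this guide.

```shell
# Refuse to push when the logged-in identity is not the one expected.
check_identity() {
  expected="$1"
  actual="$2"
  if [ "$actual" != "$expected" ]; then
    echo "refusing to push: logged in as '$actual', expected '$expected'"
    return 1
  fi
  echo "identity ok: $actual"
}

check_identity "maya" "maya"
# On a machine with the CLI installed and authenticated:
# check_identity "maya" "$(oxen whoami)" && oxen push
```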


Real-World Example

Say you’re curating a training dataset for a text + image model on a macOS laptop, but your training runs on a Linux box in the cloud and your product team uses Windows machines to inspect samples.

You:

  1. Install and log in on macOS with Homebrew:

    brew install oxen
    oxen login --api-key <LAPTOP_KEY>
    

    You version the dataset locally:

    mkdir my-multimodal-dataset && cd my-multimodal-dataset
    oxen init
    oxen add images/ labels.jsonl
    oxen commit -m "Initial dataset for v1 model"
    oxen push
    
  2. Install and log in on Linux:

    wget https://download.oxen.ai/cli/oxen-linux-amd64 -O oxen
    chmod +x oxen && sudo mv oxen /usr/local/bin/
    oxen login --api-key <TRAINING_SERVER_KEY>
    oxen clone https://hub.oxen.ai/<your-namespace>/my-multimodal-dataset
    

    Your training job reads directly from that repo, logs fine-tune metadata, and uploads model weights back to an Oxen repository.

  3. Install and log in on Windows for data review:

    • Download oxen.exe, add to PATH.
    • Run oxen login --api-key <WINDOWS_KEY>.

    Product and creative teammates can now clone the dataset, browse samples, and propose changes—all tied back to specific commits. Everyone’s using different OSes, but the workflow is identical and traceable: dataset → fine-tune → endpoint.

Pro Tip: Create separate API keys labeled by environment (laptop, linux-trainer-01, ci-prod) and periodically audit or rotate them. If one machine is compromised, you can revoke just that key instead of shutting down your entire pipeline.


Summary

Installing the Oxen.ai CLI and logging in with an API key is the unlock that turns Oxen from “a UI you visit sometimes” into “part of your day-to-day workflow.” Whether you’re on macOS with Homebrew, a Linux server with a binary install, or a Windows laptop with an .exe, the pattern is the same: install oxen, put it on your PATH, generate an API key, and log in once. From there you can version every dataset, track every model weight, and keep a clean audit trail from “which data?” to “which model?” to “which endpoint?”.
