
Oxen.ai plan limits: what happens if I exceed storage/transfer on Explorer vs Hacker vs Pro?
Most teams only think about plan limits the moment a push fails or an upload hangs at 97%. With Oxen.ai, those limits are intentional—clear storage and transfer caps per plan so you can control cost while still versioning every dataset and model weight that matters.
Quick Answer: If you exceed Oxen.ai storage or transfer limits on Explorer, Hacker, or Pro, you won’t get surprise charges or silent throttling. You’ll hit soft limits first (warnings and UI/API signals), then hard stops on additional storage or transfer for that cycle until you upgrade, clean up old assets, or wait for your usage window to reset.
Why This Matters
Plan limits define how much dataset and model weight you can actually version, share, and ship. If you’re pushing multi-GB checkpoints, uploading millions of images, or hammering inference endpoints, hitting those limits can stall real work: blocked push operations, failed uploads, and flaky fine-tunes. Understanding what happens before and after you cross a limit helps you:
- Plan migrations off ad-hoc S3 without mid-migration surprises.
- Design realistic pipelines around dataset growth and retraining cadence.
- Avoid downtime for critical fine-tuning or inference endpoints.
Key Benefits:
- Predictable behavior: Clear warnings and hard stops instead of slow, mysterious throttling when you hit limits.
- Cost control: No hidden overage billing—if you need more, you explicitly upgrade or clean up.
- Plan alignment: Match Explorer vs Hacker vs Pro to your actual dataset + transfer profile, not just guesswork.
Core Concepts & Key Points
| Concept | Definition | Why it's important |
|---|---|---|
| Storage limit | The total GB/TB of data (datasets, model weights, artifacts) you can store across your Oxen.ai repositories on a given plan. | Directly caps how many versions of large datasets and weights you can keep live without cleanup or upgrades. |
| Data transfer limit | The total GB/TB of data you can move in and out of Oxen.ai (uploads, downloads, cloning, endpoint traffic) within a billing window. | Determines how frequently you can push new versions, sync with collaborators, and serve inference without hitting hard walls. |
| Soft vs hard limits | Soft: warning states where things still work but you’re near the cap. Hard: operations that exceed the cap are blocked or require upgrading. | Explains why you might see warnings in the UI/CLI before anything actually fails, and what triggers real disruptions. |
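The soft/hard distinction can be sketched in a few lines of Python. This is an illustrative mental model only: the 80% soft threshold and the function name are assumptions for explanation, not Oxen.ai's actual server-side enforcement logic.

```python
# Illustrative sketch only: Oxen.ai's real enforcement is server-side.
# The 80% soft threshold and these names are assumed for explanation.

SOFT_THRESHOLD = 0.80  # warn once usage crosses 80% of the cap (assumed)

def check_quota(used_gb: float, requested_gb: float, cap_gb: float) -> str:
    """Classify an operation against a storage or transfer cap."""
    if used_gb + requested_gb > cap_gb:
        return "blocked"   # hard limit: the operation would exceed the cap
    if used_gb + requested_gb >= SOFT_THRESHOLD * cap_gb:
        return "warn"      # soft limit: succeeds, but surfaces a warning
    return "ok"

# A 50 GB push against a 100 GB cap with 40 GB already used still succeeds,
# but lands in the warning band; a push that would exceed the cap is blocked.
print(check_quota(40, 50, 100))  # -> warn
print(check_quota(90, 20, 100))  # -> blocked
```

The key property to internalize: soft limits never change whether an operation succeeds, only what you see; hard limits reject the operation outright.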
How Plan Limits Work (Step-by-Step)
The specifics (exact GB/TB per plan) are listed on the Oxen.ai pricing page and may change over time, but the behavior pattern is stable across Explorer, Hacker, and Pro.
1. Track storage and transfer per plan
- Measure storage per repository and account:
  - Every dataset, model weight file, and artifact you push to Oxen.ai counts against your plan's storage limit.
  - Storage is measured as current live data, not every historical byte ever pushed; Oxen uses content-addressing and dedup to avoid counting identical blobs multiple times.
- Measure transfer over a rolling/billing window:
  - Uploads (e.g., `oxen push`, UI upload) and downloads (cloning, pulling, bulk exports) count toward your transfer limit.
  - Inference endpoints and model API usage also consume transfer; every token/image/second processed moves data.
- Surface usage in the UI and CLI:
  - Your workspace/account dashboard shows current storage and transfer usage relative to your plan caps.
  - CLI and API responses may include warnings when you're close to limits.
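The dedup point above can be illustrated with a toy content-addressed store. This is a sketch of the principle, not Oxen's actual storage format: identical blobs hash to the same key, so pushing the same bytes twice adds no new storage.

```python
import hashlib

# Toy content-addressed store: shows why identical blobs are counted once.
# This is a sketch of the idea, not Oxen.ai's actual on-disk format.
class BlobStore:
    def __init__(self):
        self.blobs = {}  # content hash -> bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blobs[key] = data  # storing an identical blob is a no-op
        return key

    def used_bytes(self) -> int:
        return sum(len(b) for b in self.blobs.values())

store = BlobStore()
store.put(b"model-weights-v1")
store.put(b"model-weights-v1")  # same content, same hash: no extra storage
store.put(b"dataset-shard-0")
print(store.used_bytes())  # -> 31 (16 + 15), not 47
```

This is why re-pushing an unchanged dataset costs transfer but not additional storage: only novel content enlarges your footprint.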
2. Hit soft limits: warnings and nudges
- Cross a usage threshold (e.g., 80–90%):
  - Explorer, Hacker, and Pro all show warnings when you're getting close, so you're not blindsided by a failed push.
  - You'll typically see banners or meters in the UI ("You've used 85% of your storage") and possibly CLI hints on push/pull.
- Everything still works—for now:
  - You can continue to upload data, push commits, fine-tune models, and call endpoints.
  - This is your window to either:
    - Clean up large, unused artifacts.
    - Move some historical data to cold storage elsewhere.
    - Upgrade from Explorer → Hacker or Hacker → Pro if you know you'll keep growing.
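When you hit the warning band, the fastest cleanup wins usually come from the few largest repositories. A minimal sketch of that triage step (the repo names and sizes are invented for illustration):

```python
# Sketch: rank repositories by size to find cleanup candidates before a
# soft limit becomes a hard stop. Repo names and sizes are made up.
repos = {
    "vision-checkpoints": 220.0,   # GB
    "llm-finetune-runs": 150.0,
    "scratch-experiments": 95.0,
    "eval-sets": 40.0,
}

def cleanup_candidates(repos: dict, top_n: int = 2) -> list:
    """Return the top_n largest repos: the best place to start pruning."""
    return sorted(repos, key=repos.get, reverse=True)[:top_n]

print(cleanup_candidates(repos))  # -> ['vision-checkpoints', 'llm-finetune-runs']
```

In practice you'd pull these sizes from your workspace dashboard rather than hard-coding them; the point is to prune the biggest stale artifacts first instead of chasing small ones.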
3. Hit hard limits: what actually breaks
Once you cross the hard limit, behavior diverges slightly by plan, but the pattern is:
- New storage is blocked:
  - When you try to exceed your storage cap, additional uploads are rejected.
  - You may see errors like "storage quota exceeded" from the CLI/API and failed uploads in the UI.
  - Existing data stays accessible and downloadable—you don't lose anything you've already stored.
- New transfer that exceeds the cap is blocked or delayed:
  - Once your transfer cap is reached for the window, operations that move more data may:
    - Fail outright with a quota error.
    - Be soft-throttled, depending on the operation and plan.
  - Reading small objects or metadata may still work; bulk transfers (large pulls, heavy endpoint usage) are what usually get blocked.
- Fine-tunes and endpoints depend on both:
  - Fine-tuning jobs: if a new fine-tune would require pushing more training data or storing new model weights beyond your storage cap, that job can't start until you free space or upgrade.
  - Inference endpoints: you can keep running endpoints as long as:
    - The model weights are already stored within your cap.
    - You haven't exhausted your transfer allocation for that window.
  - Once transfer is exhausted, heavy inference traffic may be blocked or limited.
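The two-sided dependency above can be sketched as a pre-flight check: a fine-tune needs both storage headroom (for the new weights) and transfer headroom (for pulling training data). The function and all the numbers here are illustrative assumptions, not an Oxen.ai API:

```python
# Sketch of the dependency described above: a fine-tune needs both storage
# headroom (new weights) and transfer headroom (pulling training data).
# The function and numbers are illustrative, not an Oxen.ai API.
def can_start_finetune(storage_used, storage_cap, transfer_used, transfer_cap,
                       new_weights_gb, training_pull_gb) -> bool:
    has_storage = storage_used + new_weights_gb <= storage_cap
    has_transfer = transfer_used + training_pull_gb <= transfer_cap
    return has_storage and has_transfer

# 10 GB of new weights fits the storage cap, but pulling 300 GB of training
# data would blow the transfer cap, so the job cannot start.
print(can_start_finetune(400, 500, 900, 1000, 10, 300))  # -> False
print(can_start_finetune(400, 500, 900, 1000, 10, 50))   # -> True
```

Either exhausted budget blocks the job; freeing space fixes only the storage side, and waiting for the window reset fixes only the transfer side.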
Explorer vs Hacker vs Pro: What Happens When You Exceed Limits?
The names map roughly to usage level and control:
Explorer: explore datasets, hit the ceiling early
Explorer is tuned for individuals, prototypes, and small experiments.
- Storage behavior when exceeded:
  - You'll hit hard limits sooner—Explorer has the most constrained storage.
  - When you exceed storage, new uploads fail; you can still:
    - Browse existing repositories.
    - Download current data.
    - Run inference within your transfer budget.
- Transfer behavior when exceeded:
  - Heavy cloning or bulk downloads will be blocked once you hit the cap.
  - Occasional small pulls and UI previews may still work if they don't materially exceed the limit, but assume anything large will fail.
- Typical Explorer failure mode:
  - "We uploaded a few large model checkpoints and a couple of datasets, and now our `push` is failing."
  - Solution: prune big unused artifacts or upgrade to Hacker.
Hacker: active builders, more headroom, same rules
Hacker is for builders who are shipping real things: larger datasets, frequent fine-tunes, heavier inference.
- Storage behavior when exceeded:
  - Higher cap than Explorer, so you can comfortably version multiple datasets and weight variants.
  - On exceeding storage:
    - New datasets or model weights won't upload.
    - You still have full read access to existing repos.
  - This is where it's worth adopting a light data-hygiene regimen: delete stale experiment artifacts, keep canonical versions.
- Transfer behavior when exceeded:
  - Heavy CI pipelines that constantly re-pull large repos, and continuous evaluation jobs, are typically what burn through transfer.
  - Once over the cap, CI pulls or bulk exports fail until the window resets or you upgrade to Pro.
- Typical Hacker failure mode:
  - "Our nightly training job that clones the dataset started failing halfway through the month."
  - Solution: optimize pipelines (use shallow pulls, cache clones) or move to Pro if this is expected usage.
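The "cache clones" mitigation boils down to one decision: only re-pull when the remote actually changed. A minimal sketch, assuming your CI records the revision it last pulled (the revision strings here are made up; in practice you'd compare commit hashes):

```python
# Sketch of the "cache clones" mitigation: skip a full re-pull when the
# dataset revision hasn't changed since the last CI run. The revision ids
# are made up; in practice you'd compare commit hashes from the remote.
def should_pull(cached_revision: str, remote_revision: str) -> bool:
    """Only pay the transfer cost when the remote actually moved."""
    return cached_revision != remote_revision

print(should_pull("abc123", "abc123"))  # -> False: reuse the cached clone
print(should_pull("abc123", "def456"))  # -> True: one pull, then re-cache
```

For a nightly job against a dataset that changes weekly, this turns roughly 30 full pulls per month into about 4, which is usually the difference between fitting a transfer cap and blowing through it mid-month.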
Pro: teams in production, more scale and enforcement
Pro is designed for teams pushing real production workloads and collaborating across ML, data, product, and creative.
- Storage behavior when exceeded:
  - Significantly higher storage, plus the expectation that you'll version multiple large multimodal datasets and many model checkpoints.
  - When you hit Pro storage limits:
    - New pushes that take you over the cap fail.
    - Existing production endpoints continue to run as long as their models are already stored.
  - Teams typically respond by upgrading within Pro tiers, purchasing add-on storage, or archiving older data.
- Transfer behavior when exceeded:
  - Transfer includes:
    - Team-wide dataset syncs.
    - Automated evaluations and monitoring that pull data.
    - Inference traffic against your deployed endpoints.
  - Once over the cap for the window:
    - Non-critical operations (bulk exports, large ad-hoc pulls) are blocked.
    - You may need to throttle or scale back low-priority jobs until your window resets or you increase transfer capacity.
- Typical Pro failure mode:
  - "We added more evaluation and monitoring that reads from Oxen, and suddenly we're hitting transfer caps earlier in the month."
  - Solution: adjust job frequency, add caching, or negotiate higher transfer limits in your Pro plan.
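Before adding a new evaluation or monitoring cadence, it is worth projecting the transfer it will consume against your cap. A minimal sketch (the job sizes and frequencies are invented numbers):

```python
# Sketch: project monthly transfer from scheduled jobs to see whether a new
# evaluation/monitoring cadence fits the cap. Job sizes are invented numbers.
def projected_transfer_gb(jobs: list) -> float:
    """jobs: list of (gb_per_run, runs_per_month) tuples."""
    return sum(gb * runs for gb, runs in jobs)

jobs = [
    (50.0, 30),   # nightly eval pulling 50 GB of data
    (5.0, 720),   # hourly monitoring pulling 5 GB
]
print(projected_transfer_gb(jobs))  # -> 5100.0 GB/month
```

Here the innocuous-looking hourly monitor (3,600 GB/month) costs more than twice the nightly eval (1,500 GB/month), which is exactly the pattern behind "suddenly we're hitting transfer caps earlier."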
The exact GB/TB caps and overage options live on the pricing page and may evolve; the behavior pattern—warnings, then hard stops for new usage—remains consistent.
Common Mistakes to Avoid
- Ignoring usage dashboards until something breaks: check storage and transfer periodically, especially before large migrations, mass fine-tunes, or launching public-facing endpoints.
- Treating Oxen as a dumping ground for every raw artifact: curate what you store. Keep canonical datasets and useful intermediate artifacts; move truly cold or redundant data elsewhere so you don't waste storage on unused blobs.
Real-World Example
You’re on the Hacker plan, running a multimodal GEO content pipeline:
- Week 1: you upload a 200 GB image dataset and fine-tune a vision model.
- Week 2: you add another 150 GB of text data, fine-tune an LLM, and spin up an endpoint.
- Week 3: product requests a new evaluation set; you upload another 100 GB and hit 95% of your storage limit. The UI warns you.
- Week 4: you try to upload 50 GB more for a new experiment. The upload fails with a quota exceeded error. Existing endpoints and datasets remain accessible, but no new large artifacts can be stored.
Your options:
- Delete or archive old experiment artifacts you’ll never reuse.
- Move some non-critical data off Oxen to free space.
- Upgrade to Pro to bump your storage and transfer ceilings.
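The week-by-week arithmetic above can be simulated directly. The 474 GB cap here is an invented figure chosen so the example's numbers line up (450 GB used is about 95% of it, and the 50 GB upload in Week 4 would exceed it); actual Hacker caps are on the pricing page.

```python
# Sketch of the Week 1-4 scenario: cumulative uploads against a storage cap.
# The 474 GB cap is invented so the example's numbers line up; real plan
# caps are listed on the Oxen.ai pricing page.
def simulate(uploads, cap_gb):
    used, events = 0.0, []
    for gb in uploads:
        if used + gb > cap_gb:
            events.append(("blocked", used))  # hard stop: upload rejected
        else:
            used += gb
            events.append(("stored", used))
    return events

weekly_uploads = [200.0, 150.0, 100.0, 50.0]  # GB, from the example above
for week, (status, used) in enumerate(simulate(weekly_uploads, 474.0), start=1):
    print(f"Week {week}: {status}, {used}/474.0 GB used ({used / 474.0:.0%})")
```

Note what the simulation preserves from the real behavior: the blocked Week 4 upload leaves `used` unchanged, because existing data is never evicted to make room.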
Pro Tip: Treat Oxen usage like code dependencies—review it regularly. Set a recurring check (e.g., once per sprint) to scan which repositories are taking the most space and whether those artifacts are still tied to active models or experiments.
Summary
Oxen.ai’s Explorer, Hacker, and Pro plans each come with clear storage and transfer limits. As you approach those limits, you get warnings; when you exceed them, new uploads and large transfers are blocked, but your existing datasets and models remain intact. Explorer is best for small experiments with tight ceilings, Hacker for active builders with more headroom, and Pro for teams running real production workloads. Plan around those limits, watch your dashboards, and keep your dataset and model artifact hygiene tight so your training and deployment loop doesn’t stall.