
How do VESSL AI credits work (1 credit = $1) and how do I buy more credits?
Most teams just want one clear rule for billing: on VESSL AI, 1 credit always equals $1 USD. Credits are the balance you draw down as you run GPU workloads, use storage, and move data. When the balance gets low, you top it up—manually or on a recurring schedule—so your A100/H100/B200 jobs don’t stall mid-run.
This guide walks through how VESSL AI credits work, how usage is charged, and the exact steps to buy more credits.
How VESSL AI credits work (1 credit = $1)
VESSL AI uses a simple, pay-as-you-go credit system:
- 1 credit = $1 USD
- Credits decrease as you consume resources (GPUs, CPUs, storage, networking)
- Credits increase when you purchase more or when a coupon/promo is applied
You can think of credits as a prepaid wallet for all VESSL Cloud usage.
What credits are used for
Credits are applied to:
- GPU compute
- A100/H100/H200/B200/GB200/B300-class SKUs
- Charged per GPU-hour, different rates by model and reliability tier:
- Spot: lowest cost, can be preempted
- On-Demand: reliable, with automatic failover
- Reserved: capacity guarantees and discounts with commitment
- CPU & memory
- For lighter workloads, preprocessing, orchestration
- Storage
- Cluster Storage: shared high-performance file storage for running jobs
- Object Storage: lower-cost storage for datasets, checkpoints, and artifacts
- Network usage
- Egress or other billable data transfer, if applicable to your setup
Each of these has a transparent hourly or unit price in USD. Your credit balance is simply the USD total you have available.
How usage consumes credits
Every workload you run on VESSL Cloud draws from your credits in near real time.
Step-by-step billing flow
- You start a workload
- e.g., `vessl run` on an H100 On-Demand cluster, or via the Web Console
- VESSL tracks resource usage
- GPU-hours, CPU, RAM, storage, networking
- Usage is priced in USD
- Using the published rate card for the specific GPU SKU and tier
- Example (illustrative only):
- H100 On-Demand: $X.XX / GPU-hour
- A100 Spot: $Y.YY / GPU-hour
- Credits are deducted
- If the workload uses $10.75 in compute and storage, 10.75 credits are removed from your balance
You can monitor this in your Billing or Usage view in the Web Console to see:
- Current credit balance
- Historical consumption by project, cluster, or workload
- Cost breakdown by resource type (GPUs, storage, etc.)
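The deduction step above boils down to simple rate-card arithmetic. Here is a minimal sketch in Python; the rates, SKU names, and workload numbers are made-up placeholders for illustration, not VESSL's actual prices (those are on the vessl.ai pricing page):

```python
# Illustrative only: hypothetical rate card in USD per GPU-hour.
# Real per-SKU rates are published on the vessl.ai pricing page.
RATE_CARD = {
    ("H100", "on-demand"): 3.50,  # placeholder rate, not a real price
    ("A100", "spot"): 1.10,       # placeholder rate, not a real price
}

def credits_consumed(gpu_sku: str, tier: str, gpus: int, hours: float) -> float:
    """Cost in USD equals credits deducted, since 1 credit = $1."""
    rate = RATE_CARD[(gpu_sku, tier)]
    return rate * gpus * hours

# A 2-hour run on 4 H100 On-Demand GPUs at the placeholder rate:
cost = credits_consumed("H100", "on-demand", gpus=4, hours=2.0)
print(f"{cost:.2f} credits deducted")  # 3.50 * 4 * 2 = 28.00
```

The key invariant is the 1:1 mapping: whatever the USD cost works out to, that exact number of credits leaves your balance.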
What happens if you run out of credits?
Credits are what keep continuous runs alive. If your balance is depleted or too low:
- New jobs may be rejected until you top up
- Long-running jobs may be interrupted if your organization’s policy requires a positive balance
- Team members may lose the ability to launch new workloads, depending on org settings
To avoid sudden interruptions, set up alerts and auto top-ups where available:
- Email / in-app alerts when credits drop below a threshold
- Auto-reload (if enabled in your billing settings) to keep critical training jobs running without manual intervention
For mission-critical LLM post-training or production inference, pair a healthy credit buffer with On-Demand or Reserved capacity so failover and reliability work as designed.
How to check your current VESSL AI credit balance
You can always verify how many credits you have left before starting a heavy run.
In the Web Console
- Sign in to your VESSL AI account.
- Go to Billing or Account / Organization Settings (label may vary).
- Look for:
- Current credit balance (in credits and USD)
- Recent charges and usage history
- Any upcoming invoices or auto top-up settings
Via invoices or receipts
If you’re on an invoicing flow:
- Check your latest invoice for:
- Ending balance
- Credits purchased
- Credits consumed in the period
For enterprise customers, your VESSL account manager can also provide periodic usage and balance reports.
How to buy more VESSL AI credits
You can add credits at any time. The exact flow may differ slightly depending on your plan and region, but the pattern is the same: pick an amount in USD, pay, and your credits update 1:1.
Option 1: Buy credits directly in the Web Console
This is the fastest way for most teams.
- Sign in to your VESSL AI account.
- Navigate to Billing or Payments.
- Click Add Credits or Top Up.
- Enter the amount you want to purchase.
- Remember: 1 credit = $1, so:
- 100 credits = $100
- 1,000 credits = $1,000
- Add or select a payment method:
- Credit/debit card
- Other supported payment options in your region
- Confirm the purchase.
- Your credit balance updates immediately once payment is successful.
Use this flow when:
- You’re running experiments on Spot capacity and need to refill quickly.
- You’re scaling a new run across 10–100 GPUs and want a buffer.
- You want to control spend in smaller increments without an invoice cycle.
Option 2: Set up auto top-up (where available)
To avoid outages from a zero balance, use automatic top-ups if your plan supports it.
- Go to the Billing section in the Web Console.
- Look for Auto top-up or Automatic reload.
- Set:
- Minimum balance threshold (e.g., 200 credits)
- Top-up amount (e.g., reload 1,000 credits each time)
- Confirm your payment method.
- Save your settings.
Once configured:
- When your balance drops below the threshold, VESSL automatically charges the configured amount.
- Credits are added 1:1 in USD, and jobs keep running without manual intervention.
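The auto top-up behavior described above amounts to a simple threshold check. A minimal sketch, using the threshold and reload amounts from the example settings above (`charge_payment_method` is a hypothetical stand-in for the actual billing call, not a real VESSL API):

```python
MIN_BALANCE = 200     # minimum balance threshold from the example (credits)
TOPUP_AMOUNT = 1_000  # reload amount from the example (credits)

def charge_payment_method(usd: float) -> float:
    """Hypothetical stand-in for the real payment call; credits are added 1:1."""
    return usd

def maybe_top_up(balance: float) -> float:
    """If the balance has dropped below the threshold, reload automatically."""
    if balance < MIN_BALANCE:
        balance += charge_payment_method(TOPUP_AMOUNT)
    return balance

print(maybe_top_up(150))  # below threshold -> reloaded to 1150
print(maybe_top_up(500))  # above threshold -> unchanged at 500
```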
This is especially useful when you’re running:
- Long LLM post-training runs on H100 / B200
- Multi-region workloads with Auto Failover
- Production inference clusters where downtime is not acceptable
Option 3: Purchase via invoice or contract (enterprise / reserved)
If you’re an enterprise, government, or large research lab, you may prefer a more formal procurement flow:
- Talk to sales via vessl.ai to:
- Estimate monthly or project-based GPU needs (e.g., 20 H100 On-Demand for 6 months)
- Align on Reserved capacity commitments and discounts
- VESSL will:
- Issue a quote or contract with expected monthly spend
- Set up invoicing terms (e.g., net 30)
- Once the agreement is in place:
- Credits are allocated or usage is billed under the agreed terms
- You get capacity guarantees and up to ~40% discounts for Reserved tiers (depending on term and volume)
Use this route if:
- You have predictable large-scale workloads and want cost stability.
- You need VESSL as a vendor in your procurement system.
- You’re coordinating GPU capacity across multiple teams and regions.
How credits interact with Spot, On-Demand, and Reserved
Credits are neutral to the reliability tier—you can spend them on any mode—but how fast you consume them depends on the tier and GPU model.
Spot
- Best for: experiments, batch jobs, hyperparameter sweeps
- Behavior: lowest hourly rate, but instances can be preempted
- Credit impact: lower burn per GPU-hour, but preempted jobs may need to be rerun
On-Demand
- Best for: production services, critical experiments, workflows that must survive outages
- Behavior: higher rate than Spot, includes automatic failover across providers/regions
- Credit impact: predictable, steady consumption; ideal when downtime costs more than compute
Reserved
- Best for: committed, large-scale training or long-running services
- Behavior: you commit to capacity (e.g., X H100 for Y months) for discounts and guarantees
- Credit impact: you get more GPU-hours per credit thanks to discounts, with dedicated support and capacity guarantees
Regardless of tier, the billing unit is still USD, and your credits track USD 1:1.
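Under hypothetical rates, the tier differences translate into credit burn like this. All numbers are assumptions for illustration, including applying the "up to ~40%" Reserved discount mentioned earlier at its full value:

```python
# Hypothetical hourly rates for one GPU of the same SKU (USD / GPU-hour).
SPOT = 1.80
ON_DEMAND = 3.00
RESERVED = ON_DEMAND * 0.60  # illustrative full ~40% commitment discount

def gpu_hours_per_100_credits(rate: float) -> float:
    """GPU-hours you get from 100 credits ($100) at a given hourly rate."""
    return 100 / rate

for name, rate in [("Spot", SPOT), ("On-Demand", ON_DEMAND), ("Reserved", RESERVED)]:
    print(f"{name}: {gpu_hours_per_100_credits(rate):.1f} GPU-hours per 100 credits")
```

The same 100 credits buy more GPU-hours at lower rates; the trade-off is preemption risk (Spot) or commitment (Reserved).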
How to keep usage and credits under control
To avoid surprise bills and stalled jobs, treat your credit system like an operational guardrail.
Set credit thresholds and alerts
- Define a minimum target balance based on:
- Average daily spend
- Largest single job you expect to run
- Configure alerts so infra owners know when to top up before a critical week of experiments.
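One way to turn those two inputs into a concrete target balance, as a sketch; the seven-day buffer and the example figures are assumptions you would tune for your own team:

```python
def minimum_target_balance(avg_daily_spend: float,
                           largest_job_cost: float,
                           buffer_days: int = 7) -> float:
    """Keep enough credits for a buffer of typical daily spend or the single
    biggest expected job, whichever is larger."""
    return max(avg_daily_spend * buffer_days, largest_job_cost)

# e.g., $300/day average spend, one $2,500 training run on the horizon:
print(minimum_target_balance(300, 2500))  # max(2100, 2500) = 2500 credits
```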
Use project-level visibility
Where available, break down usage by:
- Project or team (e.g., LLM research vs. production inference)
- Cluster or region
- User
That makes it easier to explain where credits are going and to trim idle or inefficient workloads.
Match workload to tier
- Use Spot for:
- Non-urgent hyperparameter sweeps
- Data preprocessing where retries are cheap
- Use On-Demand + Auto Failover for:
- Anything that would hurt if it died mid-run
- Use Reserved for:
- Large, predictable training campaigns where commitment brings discounts
This alignment squeezes more actual work out of the same credits.
FAQs about VESSL AI credits
Do credits expire?
Expiry (if any) depends on your specific plan, promo terms, or contract. Check your Billing page or your agreement, or contact VESSL support/sales for exact details.
Can I get a refund for unused credits?
Refunds are typically governed by your contract and region-specific policies. For credit adjustments, speak with your VESSL account representative or support.
Can credits be shared across a team or organization?
Yes. In an organization workspace, credits are usually pooled, and all members draw from the same balance, subject to role and policy controls.
Where can I see the exact pricing for GPUs and storage?
Visit the pricing section on vessl.ai for a transparent rate card by GPU SKU, reliability tier, and storage type.
Summary: Keep credits simple, keep workloads running
- 1 credit = $1 USD, always.
- Credits pay for everything you run on VESSL Cloud—GPU/CPU time, storage, and network usage.
- You can top up credits via:
- Web Console (card/online payment)
- Auto top-up to avoid outages
- Invoice/contract for larger, committed usage
- Combine a healthy credit balance with the right reliability tier—Spot, On-Demand, Reserved—to keep your LLM, Physical AI, and AI-for-Science workloads running without constant “job wrangling.”
If you’re planning a big training run and want help estimating how many credits you’ll need, talk to the VESSL team via vessl.ai and align GPU capacity, pricing, and credits before you hit “run.”