A guide for users moving GPU workloads from Runpod to Vast.ai. Whether you use Runpod Pods for training and development or Runpod Serverless for inference, this guide maps every concept to its Vast.ai equivalent and walks you through the transition. What is different: In a nutshell, you have more control. You choose your specific machine, see the exact specs you are getting, and set the reliability level you need, rather than relying on an opaque allocation. This guide addresses each of these differences head-on so there are no surprises.
What Runpod users need to know about Vast.ai:
  1. Your existing Runpod images may work as-is. Many Runpod-compatible Docker images run on Vast with little or no modification.
  2. You pick the individual machine, not just the GPU type. Every offer shows reliability score, network speed, CPU, location, and other critical specs. Two A100s at the same price can be very different machines. Vast gives you the data to choose the right one.
  3. Bandwidth is billed separately. Providers set per-GB bandwidth rates, which you can view in the price breakdown by hovering over the price of an instance offer. Filter for instances with lower bandwidth rates using the pricing filters on the search page. Note that both inbound and outbound traffic are billed, so pulling large models or datasets counts toward your bandwidth costs.
  4. Set your disk size right at launch. Resizing requires recreating the container. Storage is cheap, so err on the side of more space.
  5. Often lower prices for the same GPU performance. Marketplace competition drives prices down. You’ll frequently find the same hardware at lower rates than fixed-tier providers.
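To budget a migration, it helps to estimate the all-in cost of a rental: compute plus storage plus bandwidth. A minimal sketch with hypothetical rates (every number below is a placeholder; read the real rates from the offer's price breakdown on the search page):

```python
HOURS_PER_MONTH = 730  # storage is priced per GB-month

def estimate_cost(gpu_hr, disk_gb, storage_gb_month, transfer_gb,
                  bandwidth_gb, hours):
    """All-in cost of a rental: compute + storage + bandwidth.

    Every rate here is a hypothetical placeholder; read the real
    numbers from the offer's price breakdown on the search page."""
    compute = gpu_hr * hours
    storage = disk_gb * storage_gb_month * hours / HOURS_PER_MONTH
    bandwidth = transfer_gb * bandwidth_gb  # inbound + outbound both count
    return round(compute + storage + bandwidth, 2)

# Example: $0.80/hr GPU, 100 GB disk at $0.10/GB-month,
# 60 GB of model/dataset transfer at $0.02/GB, for 200 hours.
print(estimate_cost(0.80, 100, 0.10, 60, 0.02, 200))
```

With these sample rates, compute dominates; bandwidth only becomes significant if you repeatedly pull large models onto fresh instances.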

In This Guide

  1. Concept Mapping: Runpod terms → Vast equivalents
  2. Account Setup
  3. Migrating from Pods: instances, Docker config, storage, networking, SSH, logs, lifecycle/cost
  4. Migrating from Serverless: endpoints, PyWorker
  5. CLI Reference: full side-by-side table

Concept Mapping

| Runpod | Vast.ai | Notes |
| --- | --- | --- |
| Pod | Instance | Docker container with exclusive GPU access |
| Serverless Endpoint -> Worker | Serverless Endpoint -> Workergroup -> Worker | Vast has managed autoscaling inference. See Migrating from Serverless |
| Community Cloud / Secure Cloud | Verified Machines (default) / Secure Cloud | Vast defaults to verified machines; Secure Cloud filters to datacenter-grade hosts |
| Template | Template / Docker image | Specify a Docker image and configuration at launch |
| Volume Disk | (Local) Volume | Machine-local storage that can attach to any GPU on the same physical node. See Storage |
| Network Volume | Object storage (S3, R2, GCS) | Vast has no cross-host network volume. For data that persists across machines, use object storage. See Storage |
| Hub | Model Library + Template Library | Official templates for popular inference engines and applications, plus specific model configs through the model library |
| Pod API | Vast REST API / vastai CLI | Full programmatic control over instances |
| GPU Type selector | Search filters (CUDA, VRAM, price/hr) | Vast is a marketplace: you search and filter offers |
| On-Demand Pod | On-Demand Instance | Fixed pricing, guaranteed resources |
| Spot Pod | Interruptible Instance | You set a max $/hr; higher-priority on-demand renters can displace you |
| Savings Plan | Reserved Instance | Pre-pay an on-demand instance for up to 50% discount |
| Runpod Console | Vast Console | Web UI for managing instances, billing, and templates |

Account Setup

  1. Create an account at cloud.vast.ai
  2. Add credits. Similar to Runpod, Vast is prepaid. Add funds via the Billing page before renting.
  3. Add your SSH public key at cloud.vast.ai/manage-keys/. If you do not have one, generate it with ssh-keygen -t ed25519. Keys are applied at container creation time. If you forget to add a key before launching, use the SSH key button on the instance card to add one without recreating the instance.
That’s all you need to get started via the console. If you plan to use the CLI or REST API, also:
  1. Generate an API key at API Keys and authenticate:
Vast CLI
pip install vastai
vastai set api-key <YOUR_API_KEY>

Migrating from Pods

A Runpod Pod is a Docker container running on a GPU-equipped machine. You pick a GPU type and a template (Docker image + config), and Runpod assigns you a machine from its managed fleet. The Vast.ai equivalent is an Instance: also a Docker container with exclusive GPU access, but rented from an open marketplace of independent hosts rather than a single provider. The core workflow is the same (pick a GPU, choose an image, launch), but how you find that GPU is different.

Finding and Creating Instances

Runpod presents a curated list of GPUs at set prices. Vast.ai is a marketplace: hosts list their machines with specs and asking prices, and you search through available offers using filters. Two A100 80GB offers at the same price can be on very different machines. Vast surfaces reliability scores, network speeds, and location for every offer so you can pick the right one for your workload. Always check these before renting:
  • Reliability score: historical uptime percentage. Look for 0.95+ for production workloads. If reliability is critical, filter for Secure Cloud machines only.
  • Network speed: inet_down and inet_up in Mbps. Matters for model downloads and data transfer.
  • Geolocation: filter by region for latency-sensitive workloads.
  1. Go to Search
  2. Use the GPU type, VRAM, reliability, and region filters to narrow results
  3. Review each offer’s reliability score and network speed before renting
  4. Click Rent on your chosen offer and configure image, disk, and Docker options in the dialog
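The selection criteria above can also be scripted. Here is a sketch that ranks mock offers by price after enforcing reliability and bandwidth floors; the dicts are illustrative, with field names loosely mirroring `vastai search offers --raw` output (verify against real output before relying on them):

```python
# Mock offers; field names loosely mirror `vastai search offers --raw` output.
offers = [
    {"id": 101, "gpu_name": "A100 80GB", "dph_total": 1.10, "reliability": 0.991, "inet_down": 850},
    {"id": 102, "gpu_name": "A100 80GB", "dph_total": 1.10, "reliability": 0.902, "inet_down": 120},
    {"id": 103, "gpu_name": "A100 80GB", "dph_total": 1.25, "reliability": 0.998, "inet_down": 2100},
]

def pick_offer(offers, min_reliability=0.95, min_inet_down=500):
    """Drop unreliable or slow offers, then take the cheapest that remains."""
    eligible = [o for o in offers
                if o["reliability"] >= min_reliability
                and o["inet_down"] >= min_inet_down]
    return min(eligible, key=lambda o: o["dph_total"], default=None)

best = pick_offer(offers)
print(best["id"])  # offer 102 costs the same as 101 but fails both floors
```

This is the "two A100s at the same price" point in code: offers 101 and 102 cost the same, but only one clears the floors.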

Docker Environment

Images

If you have a working Runpod template, you likely already have a Docker image that works on Vast. Many Runpod-compatible images run as-is: specify the image in the console's Image field or with the CLI's --image flag. To minimize cold start times:
  • Use Vast base images (also on DockerHub), which are pre-cached on many hosts
  • Use smaller, optimized images where possible
  • For very large images, build on top of a pre-cached base

Environment Variables

On Runpod, environment variables are set in the template UI or passed as a JSON object in the API. Vast works the same way.
In the instance creation dialog or template editor, set environment variables using the GUI fields or the Docker Options field with Docker syntax:
-e HF_TOKEN=hf_xxxxx -e MODEL_NAME=meta-llama/Llama-3-8B
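If you are porting many templates, you can generate the Docker Options string from the same JSON env object your Runpod template used. A small sketch (shlex.quote guards values containing spaces or shell metacharacters):

```python
import shlex

def env_to_docker_options(env: dict) -> str:
    """Turn a Runpod-style env object into Vast's Docker Options syntax."""
    return " ".join(f"-e {key}={shlex.quote(str(value))}"
                    for key, value in env.items())

opts = env_to_docker_options({"HF_TOKEN": "hf_xxxxx",
                              "MODEL_NAME": "meta-llama/Llama-3-8B"})
print(opts)  # -e HF_TOKEN=hf_xxxxx -e MODEL_NAME=meta-llama/Llama-3-8B
```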

Entrypoint Arguments

Runpod’s “Docker Command” field passes arguments to the container’s ENTRYPOINT ("dockerStartCmd" in the API). On Vast, use --args.
In the template editor or instance creation dialog, enter entrypoint arguments in the Docker Options field.

Startup Scripts

Vast has an on-start script that runs a shell command after the container starts. Runpod does not have a direct equivalent; the closest is baking the command into a custom Docker image.
In the template editor or instance creation dialog, enter your startup commands in the On-start Script field.

Converting a Runpod Template

Runpod templates bundle an image, environment, ports, and a Docker command into a reusable config. Here is how each field maps to Vast:
| Runpod template field | Vast console | Vast CLI flag |
| --- | --- | --- |
| Container Image (imageName) | Image field | --image |
| Container Disk (containerDiskInGb) | Disk field | --disk |
| Exposed Ports ("8000/http") | Docker Options: -p 8000:8000 | --env "-p 8000:8000" |
| Environment Variables (env: {...}) | Docker Options: -e KEY=VALUE | --env "-e KEY=VALUE" |
| Docker Command (dockerStartCmd) | Docker Options (entrypoint args) | --args |
| (no equivalent) | On-start Script | --onstart-cmd |
Full example: a Runpod vLLM template, each field of which maps to Vast per the table above:
curl -X POST "https://rest.runpod.io/v1/pods" \
    -H "Authorization: Bearer $RUNPOD_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
        "imageName": "vllm/vllm-openai:latest",
        "containerDiskInGb": 40,
        "ports": ["8000/http"],
        "env": {"HF_TOKEN": "hf_xxxxx"},
        "dockerStartCmd": ["--model", "meta-llama/Llama-3.1-8B-Instruct"]
    }'
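Using only the field mapping in the table above, that template translates mechanically into a `vastai create instance` invocation. Here is a sketch that builds the command; the offer ID placeholder stands in for whatever machine you pick on the search page, and the flags are exactly those listed in the table:

```python
runpod_template = {
    "imageName": "vllm/vllm-openai:latest",
    "containerDiskInGb": 40,
    "ports": ["8000/http"],
    "env": {"HF_TOKEN": "hf_xxxxx"},
    "dockerStartCmd": ["--model", "meta-llama/Llama-3.1-8B-Instruct"],
}

def to_vastai_command(tpl: dict, offer_id: str) -> list:
    """Map Runpod template fields to vastai CLI flags (per the table above)."""
    docker_opts = " ".join(
        [f"-p {p.split('/')[0]}:{p.split('/')[0]}" for p in tpl.get("ports", [])] +
        [f"-e {k}={v}" for k, v in tpl.get("env", {}).items()]
    )
    cmd = ["vastai", "create", "instance", offer_id,
           "--image", tpl["imageName"],
           "--disk", str(tpl["containerDiskInGb"]),
           "--env", docker_opts]  # quote this value when typing it in a shell
    if tpl.get("dockerStartCmd"):
        cmd += ["--args", *tpl["dockerStartCmd"]]
    return cmd

print(" ".join(to_vastai_command(runpod_template, "<OFFER_ID>")))
```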
Once you have a working configuration, save it as a reusable Vast template in the console. See the Templates guide for details.

Storage

For model weights and datasets, the recommended approach is to pull from object storage on boot. This works across any host and keeps your instance stateless:
  • Object storage (S3, R2, GCS): pull weights on boot. Most flexible, works across any host.
  • Cloud Sync: Vast’s built-in sync tool supports S3, Google Drive, Backblaze, and Dropbox. Access it via the console or vastai cloud copy. Docker instances only; use IAM credentials with S3 for bucket-scoped access rather than account-level credentials.
Vast local volumes are tied to the physical machine they were created on. For data that needs to persist across instances or move to a new host, use cloud object storage (S3, R2, GCS), a more portable and provider-agnostic approach than any single vendor’s proprietary network volume.

Networking and Ports

Both platforms provide proxy access to services. On Runpod, proxy URLs are static: https://<POD_ID>-<PORT>.proxy.runpod.net. On Vast, there are two proxy mechanisms:
  • HTTP/HTTPS proxy: instances using Vast base images get auto-generated Cloudflare tunnel URLs (https://four-random-words.trycloudflare.com) per open port via the Instance Portal. These tunnels are best-effort and may not always be available; for reliable HTTPS access, use direct connections or the built-in Jupyter certificate (see Jupyter / IDE Access).
  • SSH proxy: instances using SSH-compatible images support proxy SSH through Vast’s proxy server, which works even on machines without open ports. Direct SSH (faster) is preferred when available.
Instances also have direct access via a random external port on the host’s public IP, which you discover after launch.

Declaring Ports at Launch

In the instance creation dialog or template editor, enter port mappings in the Docker Options field:
-p 8000:8000 -p 8080:8080
Limits: Maximum 64 open ports per container. For a stable external port number, use internal ports above 70000, which are identity-mapped (the external port matches the internal port).

Discovering Your External Port

After the instance starts, click the Open Ports button on the instance card to see the external port mapping.
Vast also sets VAST_TCP_PORT_<N> environment variables inside the container for each mapped port. Use these in your application code to construct external URLs.
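For example, a service listening on internal port 8000 can build its public URL like this. The VAST_TCP_PORT_8000 variable follows the convention above; PUBLIC_IPADDR is an assumed variable name for the host's public IP, so fall back to the IP shown on the instance card if your image does not set it:

```python
import os

def external_url(internal_port: int, scheme: str = "http") -> str:
    """Build the instance's public URL for a mapped port.

    VAST_TCP_PORT_<N> holds the external port for internal port <N>.
    PUBLIC_IPADDR (an assumed name) holds the host's public IP; check
    the instance card if your image does not set it."""
    host = os.environ.get("PUBLIC_IPADDR", "<HOST_IP>")
    port = os.environ["VAST_TCP_PORT_" + str(internal_port)]
    return f"{scheme}://{host}:{port}"

# Simulated environment, standing in for what Vast sets in the container:
os.environ["PUBLIC_IPADDR"] = "203.0.113.7"
os.environ["VAST_TCP_PORT_8000"] = "41234"
print(external_url(8000))  # http://203.0.113.7:41234
```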

Connecting to Your Instance

SSH

On Runpod, you SSH into a pod using the connection info from the console. On Vast, SSH uses key-only authentication (make sure you’ve added your public key in Account Setup).
Click the SSH button on the instance card to see the full connection command, then paste it into your terminal.
Instances using Vast base images connect to a tmux session by default. Use Ctrl+B C to open a new window and Ctrl+B N to cycle between windows. To disable tmux, create ~/.no_auto_tmux inside the container. Port forwarding works the same as any SSH connection:
Bash
ssh -p <PORT> root@<VAST_IP> -L 8080:localhost:8080

Jupyter / IDE Access

Both platforms support JupyterLab. On Vast, Jupyter is available out of the box with two access modes:
  • Proxy mode (default): Click the Jupyter button in the Vast console. Works immediately, no setup needed. Uses Cloudflare tunnels, which are best-effort.
  • Direct HTTPS (recommended): Faster, more reliable connection that bypasses the proxy. Instances using Vast base images include a built-in Jupyter TLS certificate. Install the Vast TLS root certificate (jvastai_root.cer), downloadable from the Vast console, to connect directly over HTTPS without browser warnings.
Both platforms support VS Code / Cursor via Remote-SSH.

Logs

Click the Logs button on the instance card to view live output.

Instance Lifecycle and Cost

Start, Stop, and Destroy

Use the buttons on the instance card: Stop to pause compute, Destroy to delete the instance and stop all charges.
| Action | Compute Charges | Storage Charges | Data Preserved |
| --- | --- | --- | --- |
| Stop | Stop | Continue | Yes |
| Destroy | Stop | Stop | No |
If you are done with an instance, destroy it. Stopped instances continue to cost money for storage. Only stop instances you plan to resume soon.
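The stop/destroy rules above reduce to simple arithmetic. With hypothetical rates ($0.80/hr compute, 100 GB disk at $0.10/GB-month), a week in the "stopped but not destroyed" state still accrues a real storage bill:

```python
HOURS_PER_MONTH = 730  # storage is priced per GB-month

def weekly_cost(compute_hr, disk_gb, storage_gb_month, stopped):
    """One week's cost for a stopped vs. running instance.

    Rates are hypothetical; check the offer's price breakdown."""
    hours = 7 * 24
    storage = disk_gb * storage_gb_month * hours / HOURS_PER_MONTH
    compute = 0.0 if stopped else compute_hr * hours
    return round(compute + storage, 2)

print(weekly_cost(0.80, 100, 0.10, stopped=True))   # storage keeps accruing
print(weekly_cost(0.80, 100, 0.10, stopped=False))  # compute dominates
```

A destroyed instance, by contrast, costs nothing.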

On-Demand vs Interruptible

| Type | Runpod Equivalent | Price | Interruption Risk |
| --- | --- | --- | --- |
| On-Demand | On-Demand Pod | Standard marketplace rate | None (guaranteed) |
| Interruptible | Spot Pod | Often significantly cheaper | Can be displaced by on-demand renters |
On the Search page, use the Instance Type toggle to switch between on-demand and interruptible offers before renting.
Use interruptible instances for batch inference, training with checkpointing, or any workload that can be restarted if interrupted. Save checkpoints frequently.
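The checkpoint-and-resume pattern that makes a job safe on interruptible instances can be sketched as follows. Nothing here is Vast-specific; it simply writes progress to disk (ideally persistent storage or object storage) so a restarted run continues where it stopped. The file name and save interval are illustrative:

```python
import json
import os

CKPT = "checkpoint.json"  # keep on persistent disk, or sync to object storage

if os.path.exists(CKPT):
    os.remove(CKPT)  # start this demo from a clean slate

def load_step() -> int:
    """Resume from the last saved step, or start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)["step"]
    return 0

def train(total_steps: int, save_every: int = 10) -> int:
    step = load_step()
    while step < total_steps:
        step += 1  # one unit of real work (a training step) goes here
        if step % save_every == 0 or step == total_steps:
            with open(CKPT, "w") as f:
                json.dump({"step": step}, f)  # use atomic rename in real code
    return step

# If the instance is displaced mid-run, calling train() again after restart
# resumes from the last checkpoint instead of step 0.
print(train(25))
```

The shorter the save interval, the less work you lose when displaced; balance that against checkpoint write cost.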

Reserved Instances

If you run an on-demand instance for days or weeks, convert it to a reserved instance for up to 50% savings. Reserved instances lock in a discounted rate in exchange for a time commitment.
Not all machines support reserved pricing. To find eligible machines before renting, go to Search and switch the On-demand filter to Reserved. After renting, go to the Instances page and click the green discount badge on your instance card to open the pre-payment dialog.
Reserved instances cannot migrate between hosts. If the host machine goes down, your reservation is tied to that machine.

Next Steps

Instances Overview

How Vast instances work: GPU access, billing, and connectivity

Pricing

How compute, storage, and bandwidth charges work

Migrating from Serverless

Runpod Serverless lets you deploy a handler function that scales to zero: you send a request, Runpod spins up a worker, runs your handler, and tears it down, and you pay per second of compute, not for idle GPUs. Vast Serverless delivers autoscaling inference at marketplace rates, with per-second billing across 68+ GPU types globally. Vast handles routing, queueing, and autoscaling automatically. You can deploy using a pre-built template (vLLM, TGI, ComfyUI) or implement a custom handler with PyWorker, analogous to Runpod's handler pattern. Pricing: serverless workers run on the same marketplace instances you would rent directly, so you pay the same per-second rate, just with autoscaling on top.

Vast Serverless Architecture

The system has three layers:
| Component | Purpose |
| --- | --- |
| Endpoint | Routes requests, manages autoscaling |
| Workergroup | Defines what code runs and how workers are recruited |
| Worker | Individual GPU instance running your model via PyWorker |

Deployment Options

Vast Serverless supports two deployment paths:
  • PyWorker (custom handlers): the core framework for building serverless workers. You implement a handler function in Python, analogous to Runpod's handler pattern, with full control over request preprocessing, model loading, and response formatting. Most production deployments use custom PyWorker handlers. See the PyWorker documentation to get started.
  • Pre-built templates: ready-to-use starting points and examples. Vast provides templates for common frameworks (vLLM, TGI, ComfyUI) that you can deploy directly or use as a reference for building your own handlers.

Creating Endpoints and Workergroups

  1. Go to Serverless in the console
  2. Click New Endpoint and configure name, max workers, and scaling parameters
  3. Add a workergroup: choose a pre-built template (vLLM, TGI, ComfyUI) for quick setup, or configure a custom PyWorker by specifying your Docker image and PYWORKER_REPO environment variable

Calling Your Endpoint

Install the Vast SDK:
pip install vastai-sdk
For comparison, this is the Runpod client pattern you are replacing:
Python
import runpod
import os

runpod.api_key = os.environ["RUNPOD_API_KEY"]
endpoint = runpod.Endpoint("your-endpoint-id")

result = endpoint.run_sync({"input": {"prompt": "Explain quantum computing"}})
print(result["output"])
On Vast, requests are routed through your endpoint; see the Serverless Quickstart for the equivalent client call.

Next Steps

Serverless Quickstart

Deploy your first serverless endpoint

Serverless Pricing

Pay-as-you-go billing, cold workers, and endpoint suspension

CLI Reference

| Task | Runpod | Vast CLI |
| --- | --- | --- |
| Authenticate | Authorization: Bearer <KEY> | vastai set api-key <KEY> |
| Search GPUs | GET /v1/pods/gpu-types | vastai search offers '<FILTERS>' |
| Create instance | POST /v1/pods | vastai create instance <ID> --image img |
| List instances | GET /v1/pods | vastai show instances |
| Show instance | GET /v1/pods/<ID> | vastai show instance <ID> |
| Start instance | POST /v1/pods/<ID>/start | vastai start instance <ID> |
| Stop instance | POST /v1/pods/<ID>/stop | vastai stop instance <ID> |
| Destroy instance | DELETE /v1/pods/<ID> | vastai destroy instance <ID> |
| View logs | (console only) | vastai logs <ID> |
| SSH connection | ssh <ID>@ssh.runpod.io | vastai ssh-url <ID> |
| Create endpoint | (no equivalent) | vastai create endpoint --endpoint_name "x" |
| Create workergroup | (no equivalent) | vastai create workergroup --endpoint_name "x" |
| Reserve instance | (no equivalent) | vastai prepay instance <ID> <AMOUNT> |
For programmatic usage, the Vast CLI supports --raw for JSON output that can be parsed in scripts. See the CLI Reference for full documentation.
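In a script, you would typically pipe --raw output into a JSON parser and pull out the fields you need. The sketch below parses a canned response instead of calling the CLI; the record fields (id, gpu_name, dph_total, actual_status) are illustrative of instance records, so verify them against real --raw output:

```python
import json

# Stand-in for captured output of: vastai show instances --raw
# (field names are illustrative; check real --raw output).
raw = '''
[{"id": 8812345, "gpu_name": "RTX 4090", "dph_total": 0.42,
  "actual_status": "running"}]
'''

instances = json.loads(raw)
running = [i["id"] for i in instances if i["actual_status"] == "running"]
print(running)  # instance IDs currently running
```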
