Vast.ai Serverless offers pay-as-you-go pricing for all workloads at the same rates as Vast.ai’s non-Serverless GPU instances. Each instance accrues cost on a per-second basis. This guide explains how pricing works.
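As a quick illustration of per-second accrual, the sketch below converts an hourly rate into a per-second charge. The $0.40/hr figure is a made-up example, not an actual Vast.ai rate.

```python
# Illustrative only: how per-second accrual works for a single instance.
# The hourly rate here is hypothetical, not an actual Vast.ai price.

HOURLY_RATE_USD = 0.40                 # hypothetical GPU instance rate ($/hr)
PER_SECOND_RATE = HOURLY_RATE_USD / 3600

def accrued_cost(seconds_running: int) -> float:
    """Cost accrued after the instance has run for the given number of seconds."""
    return PER_SECOND_RATE * seconds_running

# e.g. 90 seconds of runtime at $0.40/hr accrues $0.01
print(f"${accrued_cost(90):.4f}")
```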

GPU Recruitment

As the Serverless engine handles requests, it automatically scales its number of workers up or down based on incoming and forecasted demand. When scaling up, the engine searches the Vast.ai marketplace for GPU instances that offer the best performance/price ratio. The selected GPU instances are recruited into the Serverless engine, and their cost ($/hr) is added to the running sum of all GPU instances running on your Serverless engine. As request demand falls off, the engine removes GPU instances, and your credit account immediately stops being charged for those instances. Visit the Billing Help page for details on GPU instance costs.
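The sketch below illustrates the selection and cost-tracking behavior described above. The Offer fields, scores, and prices are hypothetical assumptions and do not reflect the actual Vast.ai marketplace API.

```python
# A minimal sketch, assuming each marketplace offer exposes a performance
# estimate and an hourly price. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Offer:
    offer_id: str
    perf_score: float    # relative performance estimate
    price_per_hr: float  # $/hr

def best_offer(offers: list[Offer]) -> Offer:
    """Pick the offer with the highest performance / price ratio."""
    return max(offers, key=lambda o: o.perf_score / o.price_per_hr)

offers = [
    Offer("a", perf_score=100.0, price_per_hr=0.50),
    Offer("b", perf_score=160.0, price_per_hr=0.70),
    Offer("c", perf_score=120.0, price_per_hr=0.65),
]

recruited: list[Offer] = []

# Scaling up: recruit the best-value offer and add it to the running $/hr total.
recruited.append(best_offer(offers))
total_hourly_cost = sum(o.price_per_hr for o in recruited)
print(f"Running cost: ${total_hourly_cost:.2f}/hr")

# Scaling down: releasing an instance removes its $/hr from the total immediately.
recruited.pop()
total_hourly_cost = sum(o.price_per_hr for o in recruited)
```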

Suspending an Endpoint

When an Endpoint is suspended:
  • The Serverless Engine will no longer manage the GPU instances contained within the Endpoint.
  • GPU instances in this Endpoint will still be able to receive requests.

Stopping an Endpoint

Stopping an Endpoint will:
  • Cause the Serverless Engine to no longer manage the GPU instances contained within the Endpoint.
  • Put all existing GPU instances into the Inactive state.
An Inactive GPU instance will:
  • Not receive any work.
  • Not charge GPU compute costs.
  • Continue to charge the user’s account for storage and bandwidth.

Billing by Instance State

The specific charges depend on the instance’s state:
State      GPU compute   Storage   Bandwidth in   Bandwidth out
Ready      Billed        Billed    Billed         Billed
Loading    Billed        Billed    Billed         Billed
Creating   Not billed    Billed    Billed         Billed
Inactive   Not billed    Billed    Billed         Billed
GPU compute refers to the per-second GPU rental charges. See the Billing Help page for rate details.
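For reference, the table above can be read as a lookup from instance state to billed cost components. The sketch below encodes that mapping; the dollar rates and the estimate() helper are illustrative assumptions, not published Vast.ai prices.

```python
# The billed/not-billed structure comes from the table above; the rates and
# the estimate() helper are hypothetical, for illustration only.

# Which cost components are billed in each instance state.
BILLED_COMPONENTS = {
    "Ready":    {"gpu", "storage", "bandwidth_in", "bandwidth_out"},
    "Loading":  {"gpu", "storage", "bandwidth_in", "bandwidth_out"},
    "Creating": {"storage", "bandwidth_in", "bandwidth_out"},
    "Inactive": {"storage", "bandwidth_in", "bandwidth_out"},
}

# Hypothetical rates for a single instance.
GPU_RATE_PER_HR = 0.40       # $/hr, accrued per second
STORAGE_RATE_PER_HR = 0.01   # $/hr
BW_IN_RATE_PER_GB = 0.02     # $/GB
BW_OUT_RATE_PER_GB = 0.05    # $/GB

def estimate(state: str, hours: float, gb_in: float, gb_out: float) -> float:
    """Rough cost estimate for one instance over a billing window."""
    billed = BILLED_COMPONENTS[state]
    cost = 0.0
    if "gpu" in billed:
        cost += GPU_RATE_PER_HR * hours
    if "storage" in billed:
        cost += STORAGE_RATE_PER_HR * hours
    if "bandwidth_in" in billed:
        cost += BW_IN_RATE_PER_GB * gb_in
    if "bandwidth_out" in billed:
        cost += BW_OUT_RATE_PER_GB * gb_out
    return cost

# An Inactive instance accrues storage and bandwidth charges but no GPU charges.
print(f"${estimate('Inactive', hours=24, gb_in=1.0, gb_out=2.0):.2f}")
```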