# VMs
> **Warning:** VMs interface much more directly with hardware than Docker containers, and proper VM support is very sensitive to hardware setup. This guide covers the configuration steps needed to enable support for Vast VMs on most setups, but it is not and cannot be exhaustive.

## Introduction

Vast now supports VM instances running on the Kernel-based Virtual Machine (KVM) in addition to Docker container-based instances. VM support is currently an optional feature for hosts, as it usually requires additional configuration steps on top of those needed to support Docker-based instances. Host machines are not required to be VM compatible; the Vast hosting software will automatically test for and enable the feature on machines that support VMs. On new machines the tests will run on install; for machines configured before the VM feature release, testing for VM compatibility will happen when the machine is unoccupied. Machines that do not have VM support enabled will be hidden in the search page for clients who have VM-based templates selected.

## VM Support Benefits/Drawbacks

### Benefits

VM support will allow your machine to take advantage of demand for use cases that Docker cannot easily support, in addition to demand for conventional Docker-based instances. VMs support the following features/use cases that Docker-based instances do not:

| Feature | Use cases |
| --- | --- |
| systemd / Docker | Multi-application servers; tooling and DevOps (e.g., Docker Compose, Kubernetes, `docker build`) |
| Non-Linux OSes | Windows |
| Graphics | e.g., for rendering or cloud gaming |
| ptrace | Program analysis for CUDA performance optimization (e.g., via NVIDIA Nsight) |

Currently no other peer-to-peer GPU rental marketplace offers full VMs; instead, full VMs are only available from traditional providers at much higher cost. We therefore believe that hosts who have VMs enabled can expect to command a substantial premium.

### Drawbacks

- Due to the greater user control over hardware, VM support requires IOMMU settings for securing PCIe communications, which can degrade NCCL performance on non-RTX 40x0 multi-GPU machines that rely on PCI-based GPU peer-to-peer communication.
- VMs require more disk space than Docker containers, as they do not share components with the host OS. Hosts with VMs enabled may want to set higher disk and internet bandwidth prices.

### Summary

We recommend that all hosts with single-GPU rigs try to ensure VM support, as the drawbacks for single-GPU machines are minimal. We also generally recommend that multi-GPU hosts with RTX 40x0-series GPUs try enabling VMs, especially if they have plentiful disk space and fast (500 Mbps+) internet, as rendering/gaming users will benefit from those, as will users who need multi-application orchestration tools. We do not recommend that multi-GPU hosts with datacenter GPUs enable VMs until we can ensure better GPU P2P communication support in VMs, including support for NVLink.

## Configuring VMs on Your Machine

### Checking VM Enablement Status

Run `python3 /var/lib/vastai_kaalia/enable_vms.py check`. Possible results are:

- `on`: VMs are enabled on your machine.
- `off`: VMs are disabled on your machine; either you disabled VMs or our previous tests failed.
- `pending`: VMs are not disabled, but we will try to enable them once the machine is idle.

### Disabling VMs

To prevent VMs from being enabled on your machine, or to disable VMs after they have been enabled, run `python3 /var/lib/vastai_kaalia/enable_vms.py off`. Note that the default configuration on most machines will not support VMs, and we can detect that, so most hosts who do not want VMs enabled do not need to take any action.
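As a quick illustration of the status and opt-out commands above, a shell session might look like the sketch below; the commands are the ones documented in this section, but the exact output wording is an assumption (only the `on`/`off`/`pending` states are specified).

```bash
# Check whether VM support is currently enabled on this host.
python3 /var/lib/vastai_kaalia/enable_vms.py check
# -> reports one of: on, off, pending (exact wording may differ)

# Opt out: prevent VMs from being enabled, or disable them if already enabled.
python3 /var/lib/vastai_kaalia/enable_vms.py off
```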
## Configuring Your Machine to Support VMs

### Hardware Prerequisites

You will need a CPU and a chipset that support Intel VT-d or AMD-Vi.

### Configure BIOS

Check that virtualization is enabled in your BIOS. On most machines, this should be enabled by default.

### Configure Kernel Command-Line Arguments

For further reference, see [Preparing the IOMMU](https://ubuntu.com/server/docs/gpu-virtualization-with-qemu-kvm#preparing-the-input-output-memory-management-unit-iommu).

We need to ensure that the IOMMU, a technology that secures and isolates communication between PCIe devices, is set up, and to disable all driver features that interfere with VMs.

Open `/etc/default/grub` and add the following to `GRUB_CMDLINE_LINUX=`:

- `amd_iommu=on` or `intel_iommu=on`, depending on whether you have an AMD or Intel CPU
- `nvidia-drm.modeset=0`

Some hosts may also need to add the following settings:

- `rd.driver.blacklist=nouveau`
- `modprobe.blacklist=nouveau`

Then run `sudo update-grub` and reboot (a worked example of the full sequence appears at the end of this page).

### Disable Display Managers / Background GPU Processes

If you have a display manager (e.g., GDM) or display server (Xorg, Wayland, etc.) running, you must disable it. You may not run any background GPU processes for VMs to work (`nvidia-persistenced` is OK; it is managed by our hosting software).

### Enabling VMs

We will check and test your configuration when your machine is idle and enable VMs by default if your machine is capable of supporting them and you have not set VMs to off. If you have VMs set to off and you'd like to retry enabling them, run `sudo python3 /var/lib/vastai_kaalia/enable_vms.py on -f` while your machine is idle.
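Putting the steps above together, a typical enablement pass on an Intel machine might look like the following sketch. The exact GRUB arguments you need and the display-manager unit name (`gdm` is assumed here) vary by hardware and distribution, so treat this as illustrative rather than definitive.

```bash
# 1. In /etc/default/grub, extend GRUB_CMDLINE_LINUX (keep any arguments already there;
#    use amd_iommu=on instead of intel_iommu=on on AMD CPUs), e.g.:
#    GRUB_CMDLINE_LINUX="intel_iommu=on nvidia-drm.modeset=0 rd.driver.blacklist=nouveau modprobe.blacklist=nouveau"

# 2. Apply the new kernel command line, stop the display manager from starting at boot, and reboot.
sudo update-grub
sudo systemctl disable gdm   # unit name varies by distribution
sudo reboot

# 3. After the reboot, one common way to confirm the IOMMU came up is to look for
#    DMAR/IOMMU messages in the kernel log.
sudo dmesg | grep -i -e dmar -e iommu

# 4. If VM support was previously set to off, retry enabling it while the machine is idle.
sudo python3 /var/lib/vastai_kaalia/enable_vms.py on -f
```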