# CUDA Programming on Vast.ai

## Introduction

This guide walks you through setting up and running CUDA applications on Vast.ai's cloud platform. You'll learn how to set up a CUDA development environment, connect to your instance, and develop CUDA applications efficiently using NVIDIA's development tools.

## Prerequisites

- A Vast.ai account
- Basic familiarity with CUDA programming concepts
- Basic knowledge of the Linux command line
- (Optional) A TLS certificate installed for Jupyter
- (Optional) An SSH client installed on your local machine, with your SSH public key added in the Keys section at cloud.vast.ai
- (Optional) The Vast CLI installed on your local machine for command-line management
- (Optional) Docker knowledge for customizing development environments

## Setup

### 1. Select the right template

Navigate to the Templates tab to view recommended templates.

Use the NVIDIA CUDA template if:

- You need a standard CUDA development environment
- You want pre-configured security features (TLS, authentication)
- You require Jupyter notebook integration
- You need additional development tools like TensorBoard

Make a custom CUDA template if:

- You need a specific CUDA or Python version
- You have special library requirements
- You want to minimize image size for faster instance startup

### 2. Edit and select the template

Edit the template to use Jupyter launch mode if:

- You're behind a corporate firewall that blocks SSH
- You prefer browser-based development
- You want persistent terminal sessions that survive browser disconnects
- You need quick access without SSH client setup
- You want to combine CUDA development with notebook documentation
- You plan to switch between multiple terminal sessions in the browser

Edit the template to use SSH launch mode if:

- You're using VS Code Remote-SSH or other IDE integrations
- You need the lowest possible terminal latency
- You prefer using your local terminal emulator
- You want to use advanced terminal features like tmux
- You're doing extensive command-line development
- You need to transfer files frequently using scp or rsync

### 3. Create your instance

Select your desired GPU configuration based on your computational needs from the Search tab. For CUDA development, consider these system requirements:

- RAM: minimum 16 GB for development tools
- Storage: 10 GB is usually sufficient (CUDA toolkit core ~2 GB, development files and builds 3-4 GB, room for source code and dependencies ~4 GB)
- CPU: 4+ cores recommended for compilation
- Network: 100+ Mbps for remote development

Rent the GPU of your choice.

### 4. Connect to your instance

Go to the Instances tab to see your instance being created. There are multiple ways to connect to it.

If Jupyter launch mode is selected in your template:

- Click the "Open" or "Jupyter" button on your instance card
- This gives you a full development environment with notebook support

If you selected SSH launch mode:

- Click the Open Terminal Access button
- Copy the Direct SSH connect string, which looks like this:

```bash
ssh -p 12345 root@123.456.789.10 -L 8080:localhost:8080
```

Run this SSH command in your terminal on a Mac or Linux machine. On Windows, you can use PowerShell or PuTTY.

## Installation

### Setting up your development environment

The base environment includes:

- CUDA toolkit and development tools
- Python with common ML libraries
- Development utilities (gcc, make, etc.)

Install additional CUDA dependencies:

```bash
apt-get update
apt-get install -y cuda-samples
```

### Configuring your workspace

Navigate to your workspace:

```bash
cd ${WORKSPACE}
```

Set up CUDA environment variables:

```bash
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```
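With the toolkit on your PATH, you can confirm that nvcc and the GPU work end to end by compiling a minimal kernel. The file name and code below are an illustrative sketch, not part of the Vast.ai setup itself:

```cuda
// vector_add.cu -- minimal check that the compiler, driver, and GPU all work
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers with known inputs
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element
    const int threads = 256, blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compile and run it:

```bash
nvcc vector_add.cu -o vector_add
./vector_add
```

If this prints `c[0] = 3.0`, the toolkit and GPU are working together correctly.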
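The Best Practices section below recommends profiling your code with NVIDIA tools. Nsight Systems (`nsys`) and Nsight Compute (`ncu`) ship with recent CUDA toolkits, though not every image includes them; a minimal sketch, replacing `./vector_add` with whichever binary you want to profile:

```bash
nsys profile --stats=true ./vector_add   # timeline and summary statistics (Nsight Systems)
ncu ./vector_add                         # per-kernel hardware metrics (Nsight Compute)
```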
## Troubleshooting

Common issues and solutions:

### CUDA not found

Check whether the GPU is detectable:

```bash
nvidia-smi
```

If the output shows "No devices were found", report the machine by clicking the wrench icon and rent a different machine.

## Best practices

### Development workflow

Code organization:

- Keep source files in ${WORKSPACE}
- Use version control for code management
- Maintain separate directories for builds and source

Performance optimization:

- Use proper CUDA stream management
- Optimize memory transfers
- Profile code using NVIDIA tools

## Advanced topics

### Custom environment setup

Create a provisioning script for custom environment setup:

```bash
#!/bin/bash
# Activate the image's Python virtual environment, then add your own tools
source /venv/main/bin/activate
pip install <additional packages>
wget <custom tools>.tar.gz
```

### Remote development setup

- Configure VS Code or other IDEs for remote development
- Use SSH port forwarding for secure connections
- Configure development tools to use the remote CUDA compiler
- Set up source synchronization using Syncthing

## Conclusion

You now have a fully configured CUDA development environment on Vast.ai. This setup provides the flexibility of cloud GPU resources with the convenience of local development.

## Additional resources

- NVIDIA CUDA documentation: https://docs.nvidia.com/cuda/
- Vast.ai documentation: https://vast.ai/docs/
- CUDA sample projects: https://github.com/NVIDIA/cuda-samples
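The CUDA samples repository linked above is a convenient way to exercise the toolchain further. Below is a sketch of cloning it and building the deviceQuery utility; the exact steps depend on the release you check out (older tags ship per-sample Makefiles, newer ones build with CMake), so treat it as a starting point:

```bash
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples
# Optionally check out a tag matching your installed CUDA version, e.g.:
# git checkout v12.4
cd Samples/1_Utilities/deviceQuery
make            # newer releases use CMake instead; see the repo README
./deviceQuery
```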