# Image Generation
## Running Image Generation on Vast.ai: A Complete Guide

### Introduction

This guide walks you through setting up and running image generation workloads on Vast.ai, a marketplace for renting GPU compute power. Whether you're using Stable Diffusion or other image generation models, this guide will help you get started efficiently.

### Prerequisites

- A Vast.ai account
- Basic familiarity with image generation models
- (Optional) Read the Jupyter guide
- (Optional) An SSH client installed on your local machine and your SSH public key added to the Keys section at cloud.vast.ai
- (Optional) A basic understanding of model management

### Setting Up Your Environment

#### 1. Selecting the Right Template

Navigate to the Templates tab to view the available templates. For image generation, we recommend searching for "SD Web UI Forge" among the recommended templates.

The Stable Diffusion Web UI Forge template comes with:

- The latest SD Web UI version pre-installed
- Popular extensions
- Common models
- Optimized settings for Vast.ai

Choose this template if:

- You want a ready-to-use environment for image generation
- You need a user-friendly web interface
- You want access to multiple models and extensions
- You're looking for an optimized setup

Edit the template and add or update the key environment variables if needed:

```bash
# Core configuration
AUTO_UPDATE=false        # Auto-update to the latest release
FORGE_REF=latest         # Git reference for updates
FORGE_ARGS=""            # Launch arguments

# Authentication tokens
CF_TUNNEL_TOKEN=""       # Cloudflare Zero Trust
CIVITAI_TOKEN=""         # Access gated Civitai models
HF_TOKEN=""              # Access gated Hugging Face models

# Custom setup
PROVISIONING_SCRIPT=""   # URL to a custom setup script
```

**Important:** Never save your template as public if you've included tokens in the Docker options or added your Docker login password.

#### 2. Choosing an Instance

When selecting a GPU for image generation, consider:

**GPU memory**

- Minimum 8 GB for basic models
- 12 GB+ recommended for larger models
- 24 GB+ for advanced techniques (img2img, inpainting, etc.)

**GPU type**

- RTX 3090 or 4090 for best performance
- RTX 3080 or 3080 Ti for a good balance
- A4000 or A5000 for stability

**Disk space**

- Minimum 50 GB for base models
- 100 GB+ recommended for multiple models
- Consider SSD speed for model loading

#### 3. Connecting to Your Instance

The Forge template provides multiple ways to access your instance.

**AI-Dock landing page (recommended)**

- Click the "Open" button on the Instances tab once the blue button on your instance reads "Open"
- You'll be automatically logged in to the AI-Dock landing page
- Access Forge and the other management tools from there

**Direct access**

- Basic authentication is enabled by default
- Username: `vastai`
- Password: the `OPEN_BUTTON_TOKEN` value
- To find the token, run `echo $OPEN_BUTTON_TOKEN` in a terminal

**API access**

```bash
curl -X POST https://[instance_ip]:[mapped_port]/endpoint \
  -H 'Authorization: Bearer <OPEN_BUTTON_TOKEN>'
```

**Security setup**

- HTTPS and token authentication are enabled by default
- Install the TLS certificate to avoid browser warnings
- Configure via the `WEB_ENABLE_HTTPS` and `WEB_ENABLE_AUTH` variables

**Jupyter access**

To easily upload and download files, open Jupyter by clicking the Jupyter button on the instance card.

### Working with Models

#### Managing Models

**Default setup**

The template includes a default provisioning script that downloads:

- Base Stable Diffusion XL models
- Popular extensions
- Common configurations

**Custom provisioning**

To create your own setup:

1. Copy the default provisioning script: edit the SD Web UI Forge template, grab the value of the `PROVISIONING_SCRIPT` environment variable, and download the script from that URL.
2. Modify it to download your preferred models, extensions, and configurations.
3. Upload it to a Gist or Pastebin.
4. Edit the template and set the `PROVISIONING_SCRIPT` environment variable to the raw URL.

Example for adding more models:

```bash
# Navigate to the models directory
cd /workspace/stable-diffusion-webui/models/Stable-diffusion

# Download new models (example)
wget https://civitai.com/api/download/models/[model_id]
```
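Taken a step further, downloads like the one above can live in the provisioning script itself. The sketch below shows one possible shape for such a script, based on the directory layout and download examples in this guide; it is not the official default script, and the Hugging Face URL and the commented Civitai line are illustrative placeholders to replace with your own models.

```bash
#!/bin/bash
# Minimal provisioning sketch (illustrative, not the official default script).
# Downloads checkpoints and LoRAs into the Forge model directories at startup.
set -e

MODEL_DIR="/workspace/stable-diffusion-webui/models"

# Replace these with the models you actually want.
CHECKPOINT_URLS=(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
)
LORA_URLS=(
    # Placeholder Civitai download - substitute a real model ID before enabling:
    # "https://civitai.com/api/download/models/[model_id]?token=${CIVITAI_TOKEN}"
)

download_all() {
    local dest="$1"; shift
    mkdir -p "$dest"
    for url in "$@"; do
        # -nc skips files already fetched on a previous run; for gated
        # Hugging Face models add: --header="Authorization: Bearer ${HF_TOKEN}"
        wget --content-disposition -nc -P "$dest" "$url"
    done
}

download_all "${MODEL_DIR}/Stable-diffusion" "${CHECKPOINT_URLS[@]}"
download_all "${MODEL_DIR}/Lora"             "${LORA_URLS[@]}"
```

Keeping the downloads idempotent (the `-nc` flag) means nothing is re-fetched if the script happens to run more than once.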
**Model organization**

Keep your models organized:

```
/workspace/stable-diffusion-webui/models/
├── Stable-diffusion/   # Main models
├── Lora/               # LoRA models
├── VAE/                # VAE files
└── embeddings/         # Textual inversions
```

### Optimization Tips

#### Performance Settings

- Access Settings > Performance in the web UI
- Enable xFormers memory-efficient attention
- Use float16 precision when possible
- Optimize VRAM usage based on your GPU

#### Batch Processing

For multiple images:

- Use batch count for variations
- Use batch size for parallel processing
- Monitor GPU memory usage

#### Memory Management

Recommended settings for different GPU sizes:

| GPU memory | Max batch count | Max batch size |
|------------|-----------------|----------------|
| 8 GB       | 4               | 2              |
| 12 GB      | 6               | 3              |
| 24 GB+     | 10              | 5              |

### Advanced Features

#### Custom Scripts

Place custom scripts in `/workspace/stable-diffusion-webui/scripts/`.

#### Extensions Management

- Popular extensions are pre-installed
- Add more via the web UI: Extensions tab > Install from URL
- Restart the UI to apply

#### API Usage

Enable the API in Settings, or add the following to `config.json`:

```json
{
  "api": {
    "enable_api": true,
    "api_auth": false
  }
}
```

### Troubleshooting

Common issues and solutions:

**Out of memory (OOM)**

- Reduce the batch size
- Lower the resolution
- Enable the optimization settings

**Slow generation**

- Check GPU utilization
- Verify model loading
- Consider switching to half precision

**Connection issues**

- Use the `--listen` flag for network access
- Check the instance status
- Verify the network settings

### Best Practices

**Workflow management**

- Save prompts for reuse
- Use version control for custom scripts
- Document model combinations

**Resource optimization**

- Monitor costs in the Billing tab
- Use appropriate batch sizes
- Clean up unused models

**Quality control**

- Maintain prompt libraries
- Document successful settings
- Track model performance

### Cost Optimization

**Instance selection**

- Compare GPU prices
- Consider interruptible (spot) instances
- Monitor usage patterns

**Storage management**

- Remove unused models
- Archive generated images
- Use efficient formats

### Additional Resources

- Vast.ai documentation: https://vast.ai/docs/
- Stable Diffusion Web UI wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki
- Civitai models: https://civitai.com/

### Conclusion

Running image generation workloads on Vast.ai is a cost-effective way to access powerful GPUs. By following this guide and its best practices, you can efficiently set up and manage your image generation pipeline while keeping costs and performance under control.
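As a closing illustration that ties the connection and API sections together, here is a minimal sketch of a text-to-image request. It assumes the instance exposes the AUTOMATIC1111-compatible `/sdapi/v1/txt2img` endpoint with the API enabled as shown above, and that `jq` is available for parsing the response; the bracketed placeholders mirror the API access example and must be replaced with your instance's address, mapped port, and token.

```bash
# Minimal sketch of a text-to-image request against the instance API.
# Replace [instance_ip], [mapped_port], and <OPEN_BUTTON_TOKEN> with your values.
curl -s -X POST "https://[instance_ip]:[mapped_port]/sdapi/v1/txt2img" \
  -H "Authorization: Bearer <OPEN_BUTTON_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "a lighthouse at sunset, photorealistic",
        "negative_prompt": "blurry, low quality",
        "steps": 25,
        "width": 1024,
        "height": 1024,
        "batch_size": 1
      }' \
  | jq -r '.images[0]' | base64 -d > output.png
# The response carries generated images as base64 strings; this decodes the first one.
```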