
Quantized GGUF models

Here's a step-by-step guide to running quantized LLM models in multi-part GGUF format. We will use Unsloth's DeepSeek-R1 Q8_0 model as an example. This model is very large and will require an 8x H200 machine configuration, but you can also follow this guide for much smaller models.

Before moving on with the guide, set up your Vast account and add credit. Review the quickstart guide to get familiar with the service if you do not have an account with credits loaded.

Llama.cpp

Llama.cpp is the recommended method for loading these models, as it can directly load a file split into many parts without first merging them. While it's easy to build llama.cpp inside one of our instances, we will focus on running this model in the Open WebUI template, which contains pre-compiled, CUDA-compatible versions of llama-server and llama-cli.

Open WebUI Template

OpenWebUI + Ollama is one of our recommended templates. While its default setup uses Ollama as a backend, it can also access an OpenAI-compatible API, and it has been pre-configured to find one running on http://localhost:20000. A full guide to getting started with the OpenWebUI template is available here.

Ensure you have enough disk space and a suitable configuration for DeepSeek-R1 Q8_0: you'll need at least 800GB of VRAM and 700GB of storage space. The recommended configuration for this particular model is 8x H200 with 750GB of storage.

Once you have loaded up the template, you'll need to open a terminal, where we will pull and then serve the model.

Pulling the Model

You will want to download the model files from the DeepSeek-R1 Q8_0 Hugging Face repo to the /workspace/llama.cpp/models directory on your instance. We have included a script with the Ollama + Open WebUI template that you may use to easily download the models:

```bash
llama-dl.sh --repo unsloth/DeepSeek-R1-GGUF --version Q8_0
```

This download will take some time, as Hugging Face limits download speed, so even on an instance with a very fast connection it may take up to an hour to complete.

Serving the Model

Once the download has completed, it's time to serve the model using the pre-built llama-server application. Again, from the terminal, type the following:

```bash
llama-server \
  --model /workspace/llama.cpp/models/DeepSeek-R1-Q8_0/DeepSeek-R1-Q8_0-00001-of-00015.gguf \
  --ctx-size 8192 \
  --n-gpu-layers 62 \
  --port 20000
```

This command will load all of the model layers into GPU VRAM and begin serving the API at http://localhost:20000. Once the model has finished loading to the GPU, it will be available directly from the OpenWebUI interface in the model selector. Again, this may take some time to load, and if you already have OpenWebUI open you may need to refresh the page.
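To confirm the API is up before switching to the OpenWebUI interface, you can query the OpenAI-compatible endpoint directly from the instance terminal. The snippet below is a minimal sketch, assuming the server is listening on port 20000 as configured above; the model name in the request body is illustrative, since llama-server serves whichever model it was started with.

```bash
# Minimal check of the OpenAI-compatible chat completions endpoint.
# Assumes llama-server is listening on localhost:20000 as configured above;
# the "model" value is illustrative and does not select a different model.
curl http://localhost:20000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "DeepSeek-R1-Q8_0",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64
      }'
```

A JSON response containing a `choices` array indicates the server is ready.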
Building Llama.cpp

If you prefer to build llama.cpp yourself, you can simply run the following from any Vast-built template; the recommended NVIDIA CUDA template would be an ideal start:

```bash
apt-get install libcurl4-openssl-dev
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B /tmp/llama.cpp/build \
  -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build /tmp/llama.cpp/build --config Release -j --clean-first \
  --target llama-quantize llama-cli llama-server llama-gguf-split
```

These commands will build the llama-quantize, llama-cli, llama-server, and llama-gguf-split tools. For advanced build instructions, see the official documentation on GitHub.

Further Reading

Please see the template README for advanced template configuration, particularly if you would like to modify the template to make the llama-server API available externally with authentication or via an SSH tunnel.
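One common pattern for reaching the API without exposing it publicly is SSH local port forwarding. The sketch below is not taken from the template README; the connection details (port, key path, and address) are placeholders that you would replace with the SSH details shown for your instance in the Vast console.

```bash
# Sketch: forward the llama-server port to your local machine over SSH.
# SSH_PORT, KEY_PATH and INSTANCE_ADDRESS are placeholders; substitute the
# SSH connection details shown for your instance in the Vast console.
ssh -p SSH_PORT -i KEY_PATH -L 20000:localhost:20000 root@INSTANCE_ADDRESS

# With the tunnel open, the API is reachable on your local machine:
curl http://localhost:20000/v1/models
```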