All of Vast’s serverless templates use the Vast PyWorker. If you are using a recommended serverless template from Vast, the PyWorker is already integrated with the template and will automatically start up when a Worker Group is created.

A typical request flows through the system as follows (a minimal client-side sketch appears after this list):

- The client sends a POST request to the serverless system’s /route/ endpoint, asking for a GPU instance to handle the inference request.
- The serverless system selects a ready and available worker instance from the user’s endpoint and replies with a JSON object containing the URL of the selected instance.
- The client then constructs a new POST request with its payload and authentication data, and sends it to the worker instance’s URL.
- The PyWorker running on that specific instance validates the request and extracts the payload. It then sends the payload to the model inference server, which runs on the same instance as the PyWorker.
- The model generates its output and returns the result to the PyWorker.
- The PyWorker formats the model’s response as needed, and sends the response back to the client.
- Independently and concurrently, the PyWorker periodically sends its operational metrics to the serverless system, which uses them to make scaling decisions.
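
Below is a minimal client-side sketch of this flow in Python. The route URL, request fields, and auth scheme shown here are illustrative assumptions, not the exact serverless API; consult your endpoint configuration for the real values.

```python
import requests

ROUTE_URL = "https://run.vast.ai/route/"  # hypothetical route endpoint

# Step 1: ask the serverless system for a ready worker instance.
route_resp = requests.post(ROUTE_URL, json={
    "endpoint": "my-endpoint",   # hypothetical endpoint name
    "api_key": "MY_API_KEY",     # hypothetical credential
})
route_resp.raise_for_status()
worker = route_resp.json()  # JSON object containing the selected worker's URL

# Steps 2-3: send the payload and auth data to the selected worker.
infer_resp = requests.post(
    worker["url"] + "/generate",  # worker URL returned by the route call
    json={
        "auth_data": worker,      # authentication data from the route reply
        "payload": {"inputs": "What is deep learning?"},
    },
)
infer_resp.raise_for_status()
print(infer_resp.json())  # the PyWorker's formatted model response
```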

Integration with Model Instance
The Vast PyWorker wraps the backend code of the model instance you are running. The PyWorker calls the appropriate backend function when its corresponding API endpoint is invoked. For example, if you are running a text generation inference (TGI) server, your PyWorker might receive the following JSON body on its /generate endpoint:
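```json
{
  "inputs": "What is deep learning?",
  "parameters": {
    "max_new_tokens": 64,
    "temperature": 0.7
  }
}
```

The exact fields depend on your backend; this example follows TGI’s documented /generate schema, with illustrative values.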
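On the instance, the PyWorker maps each of its API endpoints to a handler that validates the request and forwards the payload to the local inference server. The sketch below is a hypothetical illustration, assuming an aiohttp-based worker and a TGI server listening on 127.0.0.1:5001; it is not the actual PyWorker implementation.

```python
import aiohttp
from aiohttp import web

MODEL_SERVER = "http://127.0.0.1:5001"  # hypothetical local TGI address

async def handle_generate(request: web.Request) -> web.Response:
    # Validate the incoming request and extract the model payload.
    body = await request.json()
    payload = body.get("payload")
    if not payload or "inputs" not in payload:
        return web.json_response({"error": "invalid payload"}, status=400)

    # Forward the payload to the inference server running on the same
    # instance, then relay the model's output back to the client.
    async with aiohttp.ClientSession() as session:
        async with session.post(f"{MODEL_SERVER}/generate", json=payload) as resp:
            result = await resp.json()
    return web.json_response(result)

app = web.Application()
app.add_routes([web.post("/generate", handle_generate)])

if __name__ == "__main__":
    web.run_app(app, port=3000)  # hypothetical PyWorker port
```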
Communication with Serverless
If you are building a custom PyWorker for your own use case, each backend must do the following to integrate with Vast’s serverless system (a rough sketch follows this list):

- Send a message to the serverless system when the backend server is ready (e.g., after model installation).
- Periodically send performance metrics to the serverless system to optimize usage and performance.
- Report any errors to the serverless system.
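
As a rough sketch of these three responsibilities, the snippet below hand-rolls the reporting calls; the report URL and JSON fields are hypothetical assumptions, and a real backend would use the hooks provided by Vast’s PyWorker framework instead.

```python
import threading
import time

import requests

REPORT_URL = "https://run.vast.ai/worker_status/"  # hypothetical reporting endpoint
WORKER_ID = "instance-1234"                        # hypothetical instance id

def send_ready() -> None:
    # 1. Tell the serverless system the backend is ready (e.g., after
    #    the model has finished downloading and loading).
    requests.post(REPORT_URL, json={"worker": WORKER_ID, "status": "ready"})

def report_metrics_forever(interval_s: float = 5.0) -> None:
    # 2. Periodically send performance metrics used for scaling decisions.
    while True:
        metrics = {"cur_load": 0, "max_load": 8}  # illustrative values
        requests.post(REPORT_URL, json={"worker": WORKER_ID,
                                        "status": "metrics", **metrics})
        time.sleep(interval_s)

def report_error(exc: Exception) -> None:
    # 3. Surface backend errors to the serverless system.
    requests.post(REPORT_URL, json={"worker": WORKER_ID, "status": "error",
                                    "message": str(exc)})

send_ready()
threading.Thread(target=report_metrics_forever, daemon=True).start()
```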