Load balancing
Increase your LLM rate limits with our load balancing feature.
Load balancing lets you distribute request load across different models or deployments. You can specify weights for each model or deployment based on its rate limit and your preferences.
See all supported params here.
Load balancing between models
You can specify load balancing weights for different models. This is useful when you want to balance load across models from different providers.
1. Go to the Load balancing page and click Create load balancing group.
2. Add models: click Add model to add models, specify the weight for each model, and add your own credentials.
3. Copy the group ID to your codebase: after you have added the models, copy the group ID (the blue text) into your codebase and use it in your requests, as sketched below.
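A rough sketch of what that might look like, assuming an OpenAI-style JSON request body; the endpoint URL, API key, and group ID below are placeholders:

```python
import requests

# Placeholder endpoint and API key -- substitute your gateway URL and credentials.
url = "https://YOUR_GATEWAY_URL/api/chat/completions"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    # The group ID copied from the Load balancing page (the blue text).
    "load_balance_group": {"group_id": "YOUR_GROUP_ID"},
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())
```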
Note: the model parameter will override the load_balance_group.
Add load balancing group in code (Optional)
You can also add the load balancing group directly in your codebase. The models field will override the load_balance_group you specified in the UI.
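A minimal sketch, assuming each entry in models takes a model name and a weight (the exact per-model fields may differ):

```python
payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "load_balance_group": {
        "group_id": "YOUR_GROUP_ID",
        # Declaring models here overrides the models configured for this group in the UI.
        "models": [
            {"model": "gpt-3.5-turbo", "weight": 1},
            {"model": "gpt-4", "weight": 1},
        ],
    },
}
```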
You can also set up fallback models to avoid errors. Requests will fall back to the list of models you specified in the fallback field if the group has any outages. Check out the Fallbacks section for more information.
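For instance, a hedged sketch in which fallback is a plain list of model names (the exact shape may differ; see the Fallbacks section):

```python
payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "load_balance_group": {"group_id": "YOUR_GROUP_ID"},
    # If the models in the group have outages, fall back to these models in order.
    "fallback": ["gpt-3.5-turbo", "gpt-4"],
}
```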
Load balancing between deployments
In the platform
You can go to the platform and add multiple deployments for the same provider, then specify load balancing weights for each deployment. This is helpful when you want to increase your rate limits with a single provider.
In the codebase
You can also load balance between deployments in your codebase: add different deployments in the customer_credentials field and specify the weight for each deployment.
Example:
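The sketch below assumes customer_credentials maps a provider name to a list of deployments, each with its own credentials and a weight; the exact schema and credential keys are placeholders:

```python
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "customer_credentials": {
        # Two OpenAI deployments; requests are split according to their weights.
        "openai": [
            {"credentials": {"api_key": "OPENAI_API_KEY_1"}, "weight": 0.5},
            {"credentials": {"api_key": "OPENAI_API_KEY_2"}, "weight": 0.5},
        ],
    },
}
```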
In this example, requests to OpenAI models will be evenly distributed between the two deployments based on their specified weights.
Specify available models
You can also specify the available models for load balancing. This is useful when you want to restrict which models a deployment handles. For example, if you only want to use gpt-3.5-turbo in an OpenAI deployment, you can specify it in the available_models field or configure it in the platform.
Learn more about how to specify available models in the platform here.
Example code:
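A sketch building on the deployment example above; available_models comes from the text, while exclude_models is an assumed field name used here only to illustrate the "explicitly excluding gpt-4" behavior described below:

```python
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "customer_credentials": {
        "openai": [
            {
                "credentials": {"api_key": "OPENAI_API_KEY_1"},
                "weight": 0.5,
                # This deployment only serves gpt-3.5-turbo...
                "available_models": ["gpt-3.5-turbo"],
                # ...and explicitly excludes gpt-4 (field name assumed for illustration).
                "exclude_models": ["gpt-4"],
            },
            {
                # No model restriction: this deployment can serve any OpenAI model.
                "credentials": {"api_key": "OPENAI_API_KEY_2"},
                "weight": 0.5,
            },
        ],
    },
}
```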
In this example, requests to OpenAI models will be distributed between the two deployments, with the first deployment only handling gpt-3.5-turbo requests and explicitly excluding gpt-4, while the second deployment can handle requests for any OpenAI model.
Based on the deployment weights and model configurations:
- GPT-3.5-turbo requests are evenly split (50/50) between both deployments
- GPT-4 requests are routed exclusively to the second deployment since it’s excluded from the first
- All other model requests are distributed evenly between deployments according to their weights
Deprecated params
The loadbalance_models parameter is deprecated. Use the load_balance_group parameter instead.
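A hedged before/after sketch of the migration (the shape of the deprecated loadbalance_models list is assumed for illustration):

```python
# Deprecated: per-request list of models and weights.
payload_old = {
    "loadbalance_models": [
        {"model": "gpt-3.5-turbo", "weight": 1},
        {"model": "gpt-4", "weight": 1},
    ],
}

# Preferred: reference a load balancing group instead.
payload_new = {
    "load_balance_group": {"group_id": "YOUR_GROUP_ID"},
}
```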