LLM Proxy
OpenAI parameters
To use Keywords AI parameters, you can pass them in the extra_body parameter.
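For example, with the OpenAI Python SDK, Keywords AI parameters can be attached through extra_body. A minimal sketch; the base URL, API key, and customer_identifier value are placeholders:

```python
from openai import OpenAI

# Point the OpenAI SDK at the Keywords AI proxy (placeholder credentials).
client = OpenAI(
    base_url="https://api.keywordsai.co/api/",
    api_key="YOUR_KEYWORDSAI_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    # Keywords AI-specific parameters travel in extra_body.
    extra_body={"customer_identifier": "customer_123"},
)
print(response.choices[0].message.content)
```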
A list of messages to send to the endpoint in the OpenAI style, each following this format:
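For example, a minimal message list in the standard OpenAI chat format:

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
```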
Image processing: If you want to use the image processing feature, you need to use the following format to upload the image.
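A sketch of that format, following the OpenAI vision-style content array (the image URL is a placeholder):

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/image.png"},
            },
        ],
    }
]
```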
Specify which model to use. See the list of models here.
This parameter will be overridden by the loadbalance_models parameter.
Whether to stream back partial progress token by token.
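For example, streaming with the OpenAI SDK (a sketch; client is the proxy client configured in the extra_body example above):

```python
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story."}],
    stream=True,  # send back partial progress token by token
)
for chunk in stream:
    # Print each token as it arrives; skip empty chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```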
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide an array of functions the model may generate JSON inputs for.
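For example, a single function tool in the standard OpenAI format (get_weather is a hypothetical function):

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]
```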
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. none is the default when no tools are present; auto is the default if tools are present.
Specifying a particular tool via the code below forces the model to call that tool.
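A sketch, reusing the hypothetical get_weather tool from the example above:

```python
tool_choice = {
    "type": "function",
    "function": {"name": "get_weather"},  # forces the model to call this tool
}
```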
Specify how much to penalize new tokens based on their existing frequency in the text so far. Decreases the model’s likelihood of repeating the same line verbatim.
Maximum number of tokens to generate in the response.
Controls randomness in the output, in the range 0-2; a higher temperature produces a more random response.
How many chat completion choices are generated for each input message.
Caveat! While this can help improve generation quality by picking the optimal choice, it can also lead to more token usage.
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
Echo back the prompt in addition to the completion.
Stop sequence.
Specify how much to penalize new tokens based on whether they appear in the text so far. Increases the model’s likelihood of talking about new topics.
Used to modify the probability of tokens appearing in the response.
An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.
Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.
Your prompt must contain the word “json” to use this feature.
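For example, enabling JSON mode (note the word “json” in the prompt):

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "List three colors as json."}],
    response_format={"type": "json_object"},  # guarantees valid JSON output
)
```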
Whether to enable parallel function calling during tool use.
Keywords AI parameters
See how to make a standard Keywords AI API call in the Quick Start guide.
Generation parameters
Balance the load of your requests between different models. See the details of load balancing here.
This parameter will override the model parameter.
Specify the list of backup models (ranked by priority) to respond in case of a failure in the primary model. See the details of fallback models here.
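A sketch of passing both parameters through extra_body; the shape of the loadbalance_models entries (model plus weight) and the fallback_models list follow the descriptions above, but treat the exact field names as assumptions and check the linked docs:

```python
# Passed via extra_body on chat.completions.create (client as configured above).
extra_body = {
    # Requests are split between these models by weight; overrides `model`.
    "loadbalance_models": [
        {"model": "gpt-4o-mini", "weight": 70},
        {"model": "claude-3-5-haiku-20241022", "weight": 30},
    ],
    # Backup models, ranked by priority, used if the primary model fails.
    "fallback_models": ["gpt-4o", "claude-3-5-sonnet-20241022"],
}
```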
You can pass in your customer’s credentials for supported providers and use their credits when our proxy is calling models from those providers.
See details here.
One-off credential overrides. Instead of using what is uploaded for each provider, this targets credentials for individual models.
Go to the provider page to see how to add your own credentials and override them for a specific model.
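A sketch of a per-request credential override; the customer_credentials field name and its per-provider shape are assumptions based on the description above, so verify them on the provider page:

```python
extra_body = {
    # Assumed shape: your customer's own provider credentials.
    "customer_credentials": {
        "openai": {"api_key": "CUSTOMER_OPENAI_KEY"},  # placeholder key
    },
}
```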
Enable or disable caches. Check the details of caches here.
This parameter specifies the time-to-live (TTL) for the cache in seconds.
This parameter specifies the cache options. Currently we support the cache_by_customer option, which you can set to true or false. If cache_by_customer is set to true, the cache will be stored by the customer identifier.
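A sketch of a cache configuration via extra_body; cache_enabled, cache_ttl, and cache_options are the names implied by the descriptions above, so verify them against the cache docs:

```python
extra_body = {
    "cache_enabled": True,                         # turn the cache on
    "cache_ttl": 600,                              # time-to-live in seconds
    "cache_options": {"cache_by_customer": True},  # scope entries by customer identifier
}
```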
The prompt template to use for the completion. You can build and deploy prompts on the Prompts page.
Enable or disable retries and set the number of retries and the time to wait before retrying. Check the details of retries here.
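A sketch of a retry configuration; the retry_params name and its fields are assumptions here, so check the retries docs for the exact shape:

```python
extra_body = {
    "retry_params": {
        "retry_enabled": True,  # assumed field: turn retries on
        "num_retries": 3,       # assumed field: how many attempts
        "retry_after": 0.2,     # assumed field: seconds to wait before retrying
    },
}
```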
When set to true, only the request and performance metrics will be recorded; input and output messages will be omitted from the log.
Specify the list of models for the Keywords AI LLM router to choose between. If not specified, all models will be used. See the list of models here.
If only one model is specified, it will be treated as if the model parameter is used and the router will not trigger.
When the model parameter is used, the router will not trigger, and this parameter behaves as fallback_models.
The list of providers to exclude from the LLM router’s selection. All models under the provider will be excluded. See the list of providers here.
This only excludes providers in the LLM router. The model parameter takes precedence over this parameter, and fallback_models and the safety net will still use the excluded models to catch failures.
The list of models to exclude from the LLM router’s selection. See the list of models here.
This only excludes models in the LLM router. The model parameter takes precedence over this parameter, and fallback_models and the safety net will still use the excluded models to catch failures.
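A sketch of constraining the LLM router; the models, exclude_providers, and exclude_models names follow the descriptions above, but treat them as assumptions:

```python
extra_body = {
    "models": ["gpt-4o-mini", "claude-3-5-haiku-20241022"],  # router chooses among these
    "exclude_providers": ["cohere"],      # drop every model from this provider
    "exclude_models": ["gpt-3.5-turbo"],  # drop specific models from the router
}
```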
Observability parameters
You can add any key-value pair to this metadata field for your reference. Check the details of metadata here.
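For example, attaching arbitrary key-value pairs (the keys and values here are placeholders):

```python
extra_body = {
    "metadata": {
        "session_id": "abc123",   # any keys you want to filter logs by
        "environment": "staging",
    },
}
```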
Contact team@keywordsai.co if you need extra parameter support for your use case.
Pass the customer’s parameters in the API call to monitor the user’s data in the Keywords AI platform. See how to get insights into your users’ data here.
Use this as a tag to identify the user associated with the API call. See the details of customer identifier here.
Adding this returns a summary of the response in the response body. If streaming is on, the metrics will be streamed as the last chunk.
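A sketch combining these observability parameters; customer_params and request_breakdown are the names implied by the descriptions above, so verify them against the linked docs:

```python
extra_body = {
    "customer_identifier": "user_123",  # tag the request with a user
    "customer_params": {                # assumed shape for the user's profile
        "name": "Jane Doe",
        "email": "jane@example.com",
    },
    "request_breakdown": True,          # append response metrics to the body
}
```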
Deprecated parameters
You can pass in a dictionary of your customer’s API keys for specific models. If the router selects a model that is in the dictionary, it will attempt to use the customer’s API key for calling the model before using your integration API key or Keywords AI’s default API key.
Balance the load of your requests between different models. See the details of load balancing here.
This parameter will override the model parameter.