POST /api/chat/completions

OpenAI parameters

To use Keywords AI parameters, you can pass them in the extra_body parameter.
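
For example, with the OpenAI Python SDK (a minimal sketch; the base URL is assumed from the Quick Start guide, and the customer_identifier value is a placeholder):

from openai import OpenAI

# Point the OpenAI SDK at the Keywords AI proxy.
client = OpenAI(
    base_url="https://api.keywordsai.co/api/",  # assumed base URL; see the Quick Start guide
    api_key="YOUR_KEYWORDSAI_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_body={"customer_identifier": "customer_123"},  # Keywords AI parameters go here
)
print(response.choices[0].message.content)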

messages
array
required

List of messages to send to the endpoint in the OpenAI style, each following this format:

messages=[
  {"role": "system",  # available roles are "user", "system", and "assistant"
   "content": "You are a helpful assistant."
  },
  {"role": "user", "content": "Hello!"}
]

Image processing: to use the image processing feature, upload images in the following format.
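
A sketch following the OpenAI multimodal message format (the URL is a placeholder):

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }
]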

model
string
required

Specify which model to use. See the list of models here.

This parameter will be overridden by the loadbalance_models parameter.
stream
boolean
default: false

Whether to stream back partial progress, token by token.

tools
array[dict]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide an array of functions the model may generate JSON inputs for.
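
For example, a single function tool in the OpenAI schema (the function name and fields are illustrative):

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city, e.g. San Francisco",
                    },
                },
                "required": ["location"],
            },
        },
    }
]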

tool_choice
dict

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.

none is the default when no tools are present. auto is the default if tools are present.

Specifying a particular tool via the code below forces the model to call that tool.

{
  "type": "function",
  "function": {"name": "name_of_the_function"}
}
frequency_penalty
number

Specify how much to penalize new tokens based on their existing frequency in the text so far. Decreases the model’s likelihood of repeating the same line verbatim.

max_tokens
number

Maximum number of tokens to generate in the response.

temperature
number
default: 1

Controls randomness in the output in the range of 0-2; a higher temperature will produce a more random response.

n
number
default: 1

How many chat completion choices are generated for each input message.

Caveat: while this can help improve generation quality by letting you pick the best choice, it also leads to more token usage.

logprobs
boolean
default: false

Whether to return log probabilities of the output tokens or not. If true, the log probabilities of each output token are returned in the content of message.

echo
boolean

Echo back the prompt in addition to the completion.

stop
array[string]

Sequences where the model will stop generating further tokens.

presence_penalty
number

Specify how much to penalize new tokens based on whether they appear in the text so far. Increases the model’s likelihood of talking about new topics.

logit_bias
dict

Used to modify the probability of tokens appearing in the response.
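
Keys are token IDs (as strings) mapped to bias values from -100 to 100, following the OpenAI convention; the token ID below is illustrative:

logit_bias = {"50256": -100}  # effectively bans token 50256 from the output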

response_format
object

An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.

Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.

You must include the word “json” in the prompt to use this feature.
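
A minimal JSON-mode request (reusing the client from the extra_body example above; note the “json” keyword in the system prompt):

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Reply in json with the keys 'answer' and 'confidence'."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    response_format={"type": "json_object"},
)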

parallel_tool_calls
boolean

Whether to enable parallel function calling during tool use.

Keywords AI parameters

See how to make a standard Keywords AI API call in the Quick Start guide.

Generation parameters

load_balance_group
object

Balance the load of your requests between different models. See the details of load balancing here.

The proxy will pick one model from the group and override the model parameter.
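
A hypothetical sketch, assuming the group is referenced by an ID (the exact schema is in the load balancing docs linked above):

extra_body = {
    "load_balance_group": {
        "group_id": "YOUR_GROUP_ID",  # hypothetical field name; check the load balancing docs
    }
}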

fallback_models
array

Specify the list of backup models (ranked by priority) to respond in case of a failure in the primary model. See the details of fallback models here.
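
For example (the model names are placeholders; pick any from the model list):

extra_body = {
    "fallback_models": ["gpt-4o", "claude-3-5-sonnet-20240620"],  # ranked by priority
}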

customer_credentials
object

You can pass in your customer’s credentials for supported providers and use their credits when our proxy calls models from those providers. See details here.
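
A sketch of the assumed shape, keyed by provider (the exact per-provider fields are in the linked docs):

extra_body = {
    "customer_credentials": {
        "openai": {"api_key": "YOUR_CUSTOMER_OPENAI_KEY"},  # assumed shape; see the linked docs
    }
}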

credential_override
object

One-off credential overrides. Instead of using what is uploaded for each provider, this targets credentials for individual models.

Go to the provider page to see how to add your own credentials and override them for a specific model.

cache_enabled
boolean

Enable or disable caches. Check the details of caches here.

cache_ttl
number

This parameter specifies the time-to-live (TTL) for the cache in seconds.

This parameter is optional; the default value is 30 days.

cache_options
object

This parameter specifies the cache options. Currently we support the cache_by_customer option, which you can set to true or false. If cache_by_customer is set to true, the cache will be stored per customer identifier.

It’s an optional parameter:
{
    "cache_options": { // optional
        "cache_by_customer": true // or false
    }
}
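
Putting the cache parameters together:

extra_body = {
    "cache_enabled": True,
    "cache_ttl": 3600,  # expire cached responses after one hour
    "cache_options": {"cache_by_customer": True},
}
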
prompt
object

The prompt template to use for the completion. You can build and deploy prompts on the Prompts page.
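
A hypothetical sketch, assuming a deployed prompt is referenced by its ID together with a map of template variables (check the Prompts docs for the exact fields):

extra_body = {
    "prompt": {
        "prompt_id": "YOUR_PROMPT_ID",            # assumed field name
        "variables": {"customer_name": "Alice"},  # assumed field name
    }
}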

disable_log
boolean

When set to true, only the request and performance metrics will be recorded; input and output messages will be omitted from the log.

model_name_map
object

This parameter is for Azure deployments only!
We understand that you may have a custom name for your Azure deployment. Keywords AI uses the model’s original name, which may not match your deployment, so you can use this parameter to map the default name to your custom name.
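
For example, mapping a model's original name to a custom Azure deployment name (both names below are placeholders):

extra_body = {
    "model_name_map": {
        "gpt-4o": "my-azure-gpt-4o-deployment",  # origin name -> your deployment name
    }
}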

models
array

Specify the list of models for the Keywords AI LLM router to choose between. If not specified, all models will be used. See the list of models here.

If only one model is specified, it will be treated as if the model parameter is used and the router will not trigger.

When the model parameter is used, the router will not trigger, and this parameter behaves as fallback_models.

exclude_providers
array
default: []

The list of providers to exclude from the LLM router’s selection. All models under the provider will be excluded. See the list of providers here.

This only excludes providers in the LLM router. The model parameter takes precedence over this parameter, and fallback_models and the safety net will still use the excluded models to catch failures.

exclude_models
array
default: []

The list of models to exclude from the LLM router’s selection. See the list of models here.

This only excludes models in the LLM router. The model parameter takes precedence over this parameter, and fallback_models and the safety net will still use the excluded models to catch failures.

Observability parameters

metadata
dict

You can add any key-value pair to this metadata field for your reference. Check the details of metadata here.
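
For example:

extra_body = {
    "metadata": {
        "session_id": "session_123",
        "environment": "production",
    }
}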

Contact team@keywordsai.co if you need extra parameter support for your use case.

customer_identifier
string

Use this as a tag to identify the user associated with the API call. See the details of customer identifier here.

customer_email
string

This is the email address of the user associated with the API call. You can add your corresponding user’s email address to the request.

You can also edit customers’ emails on the platform. Check the details of user editing here.

thread_identifier
string

View logs as a conversation thread: pass the same thread_identifier with all related requests to see their logs in the same thread.

request_breakdown
boolean
default: false

Setting this to true returns a breakdown of the request metrics in the response body. If streaming is on, the metrics will be streamed as the last chunk.

Deprecated parameters

customer_api_keys
object

You can pass in a dictionary of your customer’s API keys for specific models. If the router selects a model that is in the dictionary, it will attempt to use the customer’s API key for calling the model before using your integration API key or Keywords AI’s default API key.

{
  "gpt-3.5-turbo": "your_customer_api_key",
  "gpt-4": "your_customer_api_key"
}
loadbalance_models
array

Balance the load of your requests between different models. See the details of load balancing here.

This parameter will override the model parameter.