from requests import post

url = "https://api.keywordsai.co/api/run-evaluations/"
body = {
    "evals_to_run": ["Conciseness"],  # replace this with the name of your own eval
    "evaluation_identifier": "test_eval"
}
api_key = "YOUR_API_KEY"
headers = {
    "Authorization": f"Bearer {api_key}",
}

response = post(url, json=body, headers=headers)
print(response.json())
You can run batch evaluations on specific metrics using this endpoint. Instead of running evaluations manually in the front-end, you can set an evaluation_identifier in the request body and run evaluations on the matching logs.
When making a call to the chat/completions endpoint, you need to pass in the evaluation_identifier to mark the logs you want to run evals on:
Example
{"evaluation_identifier":"test_eval"}
ChatCompletions
api_key = "YOUR_API_KEY"
url = "https://api.keywordsai.co/api/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
}
body = {
    # ...other params...
    "evaluation_identifier": "some_identifier"  # you need to pass this in the body!
}
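For illustration only, a complete tagged request might look like the sketch below. The model name and messages are placeholder values, not parameters required by this feature; substitute your own chat/completions params.

import requests

api_key = "YOUR_API_KEY"
url = "https://api.keywordsai.co/api/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
}
body = {
    # placeholder chat/completions params -- replace with your own model and messages
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize our refund policy."}],
    # tag the resulting log so run-evaluations can find it later
    "evaluation_identifier": "some_identifier",
}

response = requests.post(url, json=body, headers=headers)
print(response.json())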
evals_to_run is a list of the names of the evals you defined on the custom evals page that you want to run. The one in the example, "Conciseness", is the eval's name as it appears in the UI.
Example
{"evals_to_run": ["Conciseness"]}