Run an LLM evaluator in code
In this guide, we will show you how to run an LLM evaluator from your code.
Prerequisites
- You have already created an LLM evaluator. Learn how to create an LLM evaluator here.
- You have already set up the Logging API or LLM proxy. Learn how to set up the Logging API here or LLM proxy here.
Evaluate LLM output with Logging API
Once the Logging API is set up, you can attach an evaluation to each log you send.
Required parameters
To run an evaluation successfully, you need to pass the following parameters:
- `completion_message`: The completion message from the LLM.
- `eval_params`: The parameters for the evaluator.
  - `evaluators`: A parameter under `eval_params` that lists the evaluators to run.
    - `evaluator_slug`: A parameter under `evaluators`; this is the slug of the evaluator you want to run.
Learn how to create an evaluator here.
Example code
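The sketch below logs a request and asks an evaluator to score it. The endpoint URL, header names, and the exact shape of `eval_params` (a list of objects, each with an `evaluator_slug`) are assumptions based on the parameters above; check your Keywords AI dashboard and the Logging API reference for the exact values.

```python
import requests

# Minimal sketch: log a completion and run an evaluator on it.
# The endpoint URL and payload shape are assumptions; adjust to your setup.
url = "https://api.keywordsai.co/api/request-logs/create/"
headers = {
    "Authorization": "Bearer YOUR_KEYWORDSAI_API_KEY",  # placeholder key
    "Content-Type": "application/json",
}

payload = {
    "model": "gpt-4o-mini",
    "prompt_messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
    # completion_message: the LLM output you want to evaluate
    "completion_message": {
        "role": "assistant",
        "content": "The capital of France is Paris.",
    },
    # eval_params -> evaluators -> evaluator_slug: which evaluator(s) to run
    "eval_params": {
        "evaluators": [
            {"evaluator_slug": "your-evaluator-slug"}  # replace with your slug
        ]
    },
}

response = requests.post(url, headers=headers, json=payload)
print(response.status_code, response.text)
```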
Add ideal output in the code
`ideal_output` is the expected (or "right") output. You can pass it to tell the evaluator what the response should look like.
You need to add `ideal_output` to the evaluator's description in the UI. Otherwise, the evaluator will not use the `ideal_output` from the code.
To add it in the code, pass `ideal_output` under `eval_inputs`, as shown in the example code below:
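Building on the payload from the example above, here is a sketch with `ideal_output` added. Whether `eval_inputs` nests inside `eval_params` is an assumption; the key names follow the parameters described above.

```python
payload["eval_params"] = {
    "evaluators": [
        {"evaluator_slug": "your-evaluator-slug"}  # replace with your slug
    ],
    # eval_inputs carries extra inputs for the evaluator;
    # ideal_output is the expected/right answer for this request
    "eval_inputs": {
        "ideal_output": "Paris"
    },
}
```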
Add multiple evaluators in the code
You can add multiple evaluators in the code. The evaluators will be run in parallel.
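For example, building on the payload above and assuming each entry in `evaluators` is an object with its own `evaluator_slug` (the slugs below are hypothetical):

```python
payload["eval_params"] = {
    "evaluators": [
        {"evaluator_slug": "relevance-check"},    # hypothetical slug
        {"evaluator_slug": "conciseness-check"},  # hypothetical slug
    ]
}
```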
See the result
You can go to Logs and see the result of the evaluation in the side panel.

Evaluate LLM output with LLM proxy
If you have set up the LLM proxy, you can attach evaluations to your proxy requests as well.
Required parameters
To run an evaluation successfully, you need to pass the following parameters:
- `eval_params`: The parameters for the evaluator.
  - `evaluators`: A parameter under `eval_params` that lists the evaluators to run.
    - `evaluator_slug`: A parameter under `evaluators`; this is the slug of the evaluator you want to run.
Learn how to create an evaluator here.
Example code
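A minimal sketch of a proxy call with an evaluation attached. The proxy endpoint URL is an assumption; use the one from your LLM proxy setup. `eval_params` rides along with the usual chat-completion fields.

```python
import requests

# Minimal sketch: call the LLM through the Keywords AI proxy and run an
# evaluator on the response. Endpoint URL and payload shape are assumptions.
url = "https://api.keywordsai.co/api/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_KEYWORDSAI_API_KEY",  # placeholder key
    "Content-Type": "application/json",
}

payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Summarize Hamlet in one sentence."}
    ],
    # eval_params -> evaluators -> evaluator_slug: which evaluator(s) to run
    "eval_params": {
        "evaluators": [
            {"evaluator_slug": "your-evaluator-slug"}  # replace with your slug
        ]
    },
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())
```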
Add ideal output in the code
`ideal_output` is the expected (or "right") output. You can pass it to tell the evaluator what the response should look like.
You need to add `ideal_output` to the evaluator's description in the UI. Otherwise, the evaluator will not use the `ideal_output` from the code.
To add it in the code, pass `ideal_output` under `eval_inputs`, as shown in the example code below:
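Building on the proxy payload above, a sketch with `ideal_output` added (again assuming `eval_inputs` nests inside `eval_params`):

```python
payload["eval_params"] = {
    "evaluators": [
        {"evaluator_slug": "your-evaluator-slug"}  # replace with your slug
    ],
    # ideal_output: the expected answer the evaluator should compare against
    "eval_inputs": {
        "ideal_output": "Prince Hamlet's quest to avenge his father's murder "
                        "ends in tragedy for nearly everyone at court."
    },
}
```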
Add multiple evaluators in the code
You can add multiple evaluators in the code. The evaluators will be run in parallel.
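As with the Logging API, you can list several evaluator slugs in the proxy payload (the slugs below are hypothetical):

```python
payload["eval_params"] = {
    "evaluators": [
        {"evaluator_slug": "faithfulness-check"},  # hypothetical slug
        {"evaluator_slug": "toxicity-check"},      # hypothetical slug
    ]
}
```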
How to run evals in other LLM SDKs
Evaluation parameters work like other Keywords AI parameters. Go to Integrations and find the LLM SDK you want to use.
For example, if you are using the OpenAI Python SDK, you can pass the evaluation parameters in `extra_body`.
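A sketch using the OpenAI Python SDK pointed at the proxy; the `base_url` value is an assumption taken from the proxy setup, and the `eval_params` shape mirrors the examples above:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.keywordsai.co/api/",  # assumed proxy base URL
    api_key="YOUR_KEYWORDSAI_API_KEY",          # placeholder key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    # extra_body forwards provider-specific fields to the proxy unchanged
    extra_body={
        "eval_params": {
            "evaluators": [
                {"evaluator_slug": "your-evaluator-slug"}  # replace with your slug
            ]
        }
    },
)
print(response.choices[0].message.content)
```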
See the result
You can go to Logs and see the result of the evaluation in the side panel.
