Human annotation
A guide on how to use human annotation in Keywords AI.
Although LLMs are powerful, they are not perfect. They can make mistakes, and sometimes those mistakes are hard to detect. Human review helps ensure the quality of LLM outputs and verify the accuracy of the AI evaluators.
Pass user feedback to the API
If you are using the Logging API, you can pass user feedback as a `positive_feedback` field in the request body. `positive_feedback` is a boolean field that records whether the user liked the output.
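Below is a minimal sketch of what such a request might look like. The endpoint URL, authentication header, and surrounding payload fields are assumptions for illustration; check the Logging API reference for the exact schema.

```python
import os
import requests

# Minimal sketch: log an LLM request and record whether the user liked the output.
# The endpoint URL and payload fields other than positive_feedback are assumptions;
# consult the Logging API reference for the exact schema.
response = requests.post(
    "https://api.keywordsai.co/api/request-logs/create/",  # assumed Logging API endpoint
    headers={
        "Authorization": f"Bearer {os.environ['KEYWORDSAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",
        "prompt_messages": [{"role": "user", "content": "Summarize this article."}],
        "completion_message": {"role": "assistant", "content": "Here is a summary..."},
        "positive_feedback": True,  # boolean: the user liked this output
    },
)
response.raise_for_status()
```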
After you pass the `positive_feedback` field, you can see it in the side panel of the Logs page.