Retrieval
LLM-based Context Precision
Measure information density
Definition
Context Precision measures the information density of the retrieved context. See Relari AI's definition for more details.
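As a rough intuition, the score is the fraction of retrieved chunks that an LLM judge deems relevant to the question. The sketch below is only illustrative: it uses a keyword-overlap stand-in for the LLM judge, whereas Keywords AI runs the judgment with the model you pick in the evaluation settings.

```python
# Minimal sketch of an LLM-based Context Precision score: a judge labels each
# retrieved chunk as relevant or not, and the score is the fraction of chunks
# judged relevant. The keyword-overlap judge below is a stand-in for a real
# LLM judge.

def judge_relevance(question: str, chunk: str) -> bool:
    """Stand-in judge: real implementations prompt an LLM with the question
    and the chunk and parse a yes/no relevance verdict."""
    keywords = {w.strip("?.,!").lower() for w in question.split() if len(w) > 3}
    return any(k in chunk.lower() for k in keywords)

def context_precision(question: str, retrieved_chunks: list[str]) -> float:
    """Fraction of retrieved chunks the judge marks as relevant."""
    if not retrieved_chunks:
        return 0.0
    relevant = sum(judge_relevance(question, c) for c in retrieved_chunks)
    return relevant / len(retrieved_chunks)

if __name__ == "__main__":
    chunks = [
        "Paris is the capital of France.",
        "France's capital, Paris, also hosts the national government.",
        "Bananas are a good source of potassium.",
    ]
    # 2 of the 3 chunks are judged relevant, so the score is ~0.67
    print(context_precision("What is the capital of France?", chunks))
```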
Settings and parameters
Go to Keywords AI (at the top of the left nav bar) > Evaluation > Retrieval > Context Precision.
Click the card to create the setting:
1. Click the enable switch to turn on the evaluation.
2. Pick an LLM model to run the evaluation with.
3. Hit the “Save” button.
Make an API call, and the evaluation will run on requests selected by the Random sampling setting.
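Below is a minimal sketch of such a call, assuming the OpenAI-compatible chat completions endpoint and a `KEYWORDSAI_API_KEY` environment variable; the URL, model name, and request fields here are illustrative, so check the API reference for the exact format.

```python
# Illustrative request through the Keywords AI gateway; once the evaluation is
# enabled, sampled requests like this one are scored for Context Precision.
import os
import requests

response = requests.post(
    "https://api.keywordsai.co/api/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['KEYWORDSAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
)
print(response.status_code)
```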