Prompt logging
Monitor a prompt in production and get detailed metrics.
Overview
Prompt logging gives you visibility into how your prompts perform in real-world applications. Track usage patterns, identify issues, and make data-driven improvements to your AI interactions.
Why monitor prompts?
- Measure performance metrics: Track token usage, request volume, latency, and error rates to understand your prompt’s efficiency.
- Compare version performance: Identify your best-performing prompt variants with side-by-side metric comparisons.
- Analyze request distribution: See exactly how your LLM traffic is distributed across different prompts.
Quickstart
First, create a prompt in Keywords AI and find its prompt ID on the Prompts page.
Even if you have already defined the prompt’s configuration (model, temperature, etc.), you should still pass those parameters in the payload.
You don’t need to pass token-related parameters, since we calculate them for you, but you do need to pass time-related parameters such as generation time and TTFT (time to first token).
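For example, a minimal Logging API payload might look like the following Python sketch. The endpoint URL, API key placeholder, and field values are illustrative assumptions; check the Logging API reference for the authoritative parameter list.

```python
import requests

# Illustrative sketch of logging a prompt request, assuming the
# request-logs endpoint from the Logging API reference.
url = "https://api.keywordsai.co/api/request-logs/create/"
headers = {
    "Authorization": "Bearer YOUR_KEYWORDSAI_API_KEY",
    "Content-Type": "application/json",
}
payload = {
    "prompt_id": "YOUR_PROMPT_ID",  # found on the Prompts page
    # Pass the configuration even if it is already defined on the prompt.
    "model": "gpt-4o",
    "temperature": 0.7,
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": "Hi there!"},
    # Token-related metrics are calculated for you; timing fields are not.
    "generation_time": 1.2,  # total generation time (seconds, assumed unit)
    "ttft": 0.3,             # time to first token (seconds, assumed unit)
}

response = requests.post(url, headers=headers, json=payload)
print(response.status_code)
```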
Variables logging
When you make a request through the Logging API, you can send the prompt variables in the prompt_messages field. Simply wrap each prompt variable in a pair of {{}}.
Learn how to use the Logging API here.
Example:
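A minimal sketch of the payload; the {{text}} variable name is illustrative. Send it with the same requests.post call as in the Quickstart sketch above.

```python
# Prompt variables are wrapped in {{}} inside prompt_messages.
payload = {
    "prompt_id": "YOUR_PROMPT_ID",
    "model": "gpt-4o",
    "prompt_messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the following text: {{text}}"},
    ],
    "completion_message": {"role": "assistant", "content": "Here is a summary..."},
    "generation_time": 1.2,
    "ttft": 0.3,
}
```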
External prompt logging
If you don’t want to create a prompt in Keywords AI but still want to log your prompts, you can pass your own prompt ID in the prompt_id field and set is_custom_prompt to true so the system knows it’s a custom prompt.
Example code
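A minimal sketch, under the same endpoint and API key assumptions as the Quickstart example; the prompt ID value is illustrative.

```python
import requests

# Log a request against a prompt that was never created in Keywords AI.
payload = {
    "prompt_id": "my_external_prompt_v2",  # your own identifier
    "is_custom_prompt": True,              # marks this as a custom prompt
    "model": "gpt-4o",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": "Hi there!"},
    "generation_time": 1.1,
    "ttft": 0.25,
}

response = requests.post(
    "https://api.keywordsai.co/api/request-logs/create/",
    headers={"Authorization": "Bearer YOUR_KEYWORDSAI_API_KEY"},
    json=payload,
)
print(response.status_code)
```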
You can then see the prompt metrics in Dashboard and Logs.