How to log prompt variables

Via LLM Gateway

If you are using the LLM Gateway, first set up your prompts in the platform; we then log the LLM requests and their variables automatically. For example, suppose you have created a prompt like this:
Sample prompt in Keywords AI
Then reference the deployed prompt in your code:
"model": "gpt-4o-mini",
"prompt": {
    "prompt_id": "09IqVI", // prompt ID in the code
    "variables": {"language": "Python","task_description": "Square a number", "specific_library": "math"},
    "version": 15 // optional, you can specify a version of the prompt you want to use
}
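As a rough sketch, the full request over HTTP could look like the Python below. The endpoint URL, bearer-token header, and KEYWORDSAI_API_KEY environment variable are assumptions for illustration; substitute your actual gateway configuration.

import os
import requests

# Assumed Keywords AI gateway endpoint and bearer-token auth;
# verify both against your own setup before using.
url = "https://api.keywordsai.co/api/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['KEYWORDSAI_API_KEY']}"}

payload = {
    "model": "gpt-4o-mini",
    "prompt": {
        "prompt_id": "09IqVI",
        "variables": {
            "language": "Python",
            "task_description": "Square a number",
            "specific_library": "math",
        },
        "version": 15,  # optional: pin a prompt version
    },
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())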
Once you send a request through the LLM Gateway, you will see the prompt variables in the side panel.
Prompt variables in logs

Via Logging ingestion

When you make a request through Logging ingestion, you can log prompt variables as well: wrap each variable name in double curly braces, {{ }}, inside the prompt_messages field, and supply the values in the variables field. Example:
"prompt_messages": [
    {
    "content": "Please develop an optimized Python function to {{task_description}}, utilizing {{specific_library}}, include error handling, and write unit tests for the function.",
    "role": "user"
    }
],
"variables": {
    "task_description": "Square a number",
    "specific_library": "math"
}
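Putting it together, a log could be submitted as in the Python sketch below. The ingestion endpoint URL and auth header are assumptions here; check the Keywords AI API reference for the exact path and schema.

import os
import requests

# Assumed logging ingestion endpoint; confirm the exact path in the
# Keywords AI API reference.
url = "https://api.keywordsai.co/api/request-logs/create/"
headers = {"Authorization": f"Bearer {os.environ['KEYWORDSAI_API_KEY']}"}

log = {
    "model": "gpt-4o-mini",
    "prompt_messages": [
        {
            "role": "user",
            # {{task_description}} and {{specific_library}} are resolved
            # from the "variables" field below.
            "content": (
                "Please develop an optimized Python function to "
                "{{task_description}}, utilizing {{specific_library}}, "
                "include error handling, and write unit tests for the function."
            ),
        }
    ],
    "variables": {
        "task_description": "Square a number",
        "specific_library": "math",
    },
}

requests.post(url, headers=headers, json=log)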

Why you should log prompt variables

  1. Easy to read: Logged prompt variables appear in the side panel, so you can quickly inspect the content of each variable without digging into the full prompt.
Prompt variables in logs
  2. Easy to add to a testset: Logged variables make it simple to turn real-world logs into test cases. With one click, you can add them to a testset and start iterating on your prompts with realistic data.