What is Tracing?

LLM tracing records your LLM requests and responses so you can follow the full workflow of LLM calls and tool invocations across your AI application.

How to set up Tracing?

1. Install the SDK

Install the package using your preferred package manager:

pip install keywordsai-tracing
2. Set up Environment Variables

Get your API key from the API Keys page in Settings, then configure it in your environment:

.env
KEYWORDSAI_BASE_URL="https://api.keywordsai.co/api"
KEYWORDSAI_API_KEY="YOUR_KEYWORDSAI_API_KEY"
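If you want to sanity-check that these variables are visible to your process before initializing the SDK, a small stdlib-only sketch can help (the variable names come from the step above; the `setdefault` values here are illustrative placeholders, not something the SDK requires you to set in code):

```python
import os

# Illustrative fallbacks only -- in practice these come from your .env file
# or shell environment, not from code.
os.environ.setdefault("KEYWORDSAI_BASE_URL", "https://api.keywordsai.co/api")
os.environ.setdefault("KEYWORDSAI_API_KEY", "YOUR_KEYWORDSAI_API_KEY")

base_url = os.getenv("KEYWORDSAI_BASE_URL")
api_key = os.getenv("KEYWORDSAI_API_KEY")

# Fail early with a clear message instead of a confusing auth error later.
assert base_url and api_key, "Keywords AI environment variables are missing"
```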
3. Annotate your workflows

Use the @workflow and @task decorators to instrument your code:

Python
from keywordsai_tracing.decorators import workflow, task
from keywordsai_tracing.main import KeywordsAITelemetry

# Initializing the telemetry client picks up the environment variables
# configured in the previous step.
k_tl = KeywordsAITelemetry()

@workflow(name="my_workflow")
def my_workflow():
    @task(name="my_task")
    def my_task():
        pass
    my_task()

my_workflow()
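To make the decorator pattern concrete without the SDK installed, here is a runnable sketch using hypothetical no-op stand-ins with the same shape as `@workflow` and `@task` (the real decorators from `keywordsai_tracing` additionally open trace spans named after each unit of work):

```python
from functools import wraps

def workflow(name):
    """Stand-in for the real decorator; illustration only."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            # The real decorator would start a trace span called `name` here
            # and record the function's inputs and outputs.
            return fn(*args, **kwargs)
        return wrapper
    return deco

task = workflow  # same call shape for this sketch

@workflow(name="summarize_numbers")
def summarize_numbers(values):
    @task(name="total")
    def total(xs):
        return sum(xs)

    @task(name="mean")
    def mean(xs):
        return total(xs) / len(xs)

    return {"total": total(values), "mean": mean(values)}

print(summarize_numbers([1, 2, 3, 4]))  # {'total': 10, 'mean': 2.5}
```

The point of the pattern is that a workflow groups related tasks into one trace, so each task's inputs and outputs show up as a nested span under the workflow.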
4. A full example with LLM calls

This example shows how to implement a workflow that includes LLM calls, using the OpenAI SDK.

main.py
from openai import OpenAI
from keywordsai_tracing.decorators import workflow, task
from keywordsai_tracing.main import KeywordsAITelemetry

k_tl = KeywordsAITelemetry()
client = OpenAI()

@workflow(name="create_joke")
def create_joke():
    # The workflow decorator captures this LLM request and its response
    # as part of the trace.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
        temperature=0.5,
        max_tokens=100,
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    return completion.choices[0].message.content

create_joke()

You can now see your traces on the Traces page, and inspect the details of each LLM call in Logs.