What are traces?

Traces are chained collections of workflows and tasks. You can use tree and waterfall views to track dependencies and latency across a run.
[Image: Traces example]

Integrate with your existing AI framework

Keywords AI integrates seamlessly with popular AI frameworks to give you complete observability into your agent workflows.

Supported frameworks

OpenAI Agent SDK

Vercel AI SDK

Mastra SDK

Keywords AI Native

Keywords AI native tracing

If you don't want to commit to an AI framework right now, you can use Keywords AI native tracing for your AI agents instead. Just add the keywordsai-tracing package to your project and annotate your workflows.
Step 1: Install the SDK

Install the package using your preferred package manager:
pip install keywordsai-tracing
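
If you use Poetry or uv instead of pip, the equivalent commands are:
poetry add keywordsai-tracing
uv add keywordsai-tracing
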
Step 2: Set up environment variables

Get your API key from the API Keys page in Settings, then configure it in your environment:
.env
KEYWORDSAI_BASE_URL="https://api.keywordsai.co/api"
KEYWORDSAI_API_KEY="YOUR_KEYWORDSAI_API_KEY"
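
These variables are read from the process environment. If you keep them in a .env file rather than exporting them in your shell, load the file before initializing telemetry, for example with the python-dotenv package (an assumption; any loader that populates the environment works):
Python
# A minimal sketch assuming python-dotenv (pip install python-dotenv).
# load_dotenv() populates os.environ from .env so KeywordsAITelemetry
# can read KEYWORDSAI_BASE_URL and KEYWORDSAI_API_KEY at startup.
from dotenv import load_dotenv

load_dotenv()
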
Step 3: Annotate your workflows

Use the @workflow and @task decorators to instrument your code:
Python
from keywordsai_tracing.decorators import workflow, task
from keywordsai_tracing.main import KeywordsAITelemetry

# Initialize telemetry once at startup; it reads the environment
# variables you configured in step 2.
k_tl = KeywordsAITelemetry()

@workflow(name="my_workflow")
def my_workflow():
    @task(name="my_task")
    def my_task():
        pass
    my_task()  # recorded as a child span of "my_workflow"
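
The decorators also work on functions defined at module level, which lets you reuse tasks across workflows. As a minimal sketch (the names fetch_numbers, sum_numbers, and pipeline are hypothetical, and we assume each task called inside a workflow is recorded as a child span of that workflow's trace):
Python
from keywordsai_tracing.decorators import workflow, task
from keywordsai_tracing.main import KeywordsAITelemetry

k_tl = KeywordsAITelemetry()

@task(name="fetch_numbers")
def fetch_numbers():
    return [1, 2, 3]

@task(name="sum_numbers")
def sum_numbers(numbers):
    return sum(numbers)

@workflow(name="pipeline")
def pipeline():
    numbers = fetch_numbers()    # first child span of "pipeline"
    return sum_numbers(numbers)  # second child span

pipeline()
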
Step 4: A full example with LLM calls

This example shows how to implement a workflow that includes LLM calls, using the OpenAI SDK.
main.py
from openai import OpenAI
from keywordsai_tracing.decorators import workflow
from keywordsai_tracing.main import KeywordsAITelemetry

# Initialize telemetry once; OpenAI() reads OPENAI_API_KEY from the environment.
k_tl = KeywordsAITelemetry()
client = OpenAI()

@workflow(name="create_joke")
def create_joke():
    # The LLM call below is captured as part of the "create_joke" trace.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
        temperature=0.5,
        max_tokens=100,
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    return completion.choices[0].message.content
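
To run the example end to end, call the decorated workflow like a normal function, for instance from a main guard appended to main.py (a minimal sketch):
Python
# Invoke the traced workflow and print the generated joke.
if __name__ == "__main__":
    print(create_joke())
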
You can now see your traces on the Traces page, and open Logs to see the details of your LLM calls.