What is Tracing?
LLM tracing records your application's LLM requests and responses, letting you follow the full workflow of LLM calls and tool invocations in your AI application.
How to set up Tracing?
Python
Install the SDK
Install the package using your preferred package manager:
pip install keywordsai-tracing
Set up Environment Variables
Get your API key from the API Keys page in Settings, then configure it in your environment:
KEYWORDSAI_BASE_URL="https://api.keywordsai.co/api"
KEYWORDSAI_API_KEY="YOUR_KEYWORDSAI_API_KEY"
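If you keep these values in a .env file, one way to load them before initializing the SDK is with the python-dotenv package (an assumption; any method of setting environment variables works):

from dotenv import load_dotenv

# Loads KEYWORDSAI_BASE_URL and KEYWORDSAI_API_KEY from a local .env file
load_dotenv()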
Annotate your workflows
Use the @workflow and @task decorators to instrument your code:
from keywordsai_tracing.decorators import workflow, task
from keywordsai_tracing.main import KeywordsAITelemetry

# Initializing KeywordsAITelemetry activates the tracing exporter
k_tl = KeywordsAITelemetry()

@workflow(name="my_workflow")
def my_workflow():
    @task(name="my_task")
    def my_task():
        pass
    my_task()
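Calling the decorated function is all it takes to emit a trace; for example:

# Runs the workflow and records a my_workflow trace with a nested my_task span
my_workflow()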
A full example with LLM calls
This example shows how to implement a workflow that includes LLM calls, using the OpenAI SDK.
from openai import OpenAI
from keywordsai_tracing.decorators import workflow, task
from keywordsai_tracing.main import KeywordsAITelemetry

k_tl = KeywordsAITelemetry()
client = OpenAI()

@workflow(name="create_joke")
def create_joke():
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
        temperature=0.5,
        max_tokens=100,
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    return completion.choices[0].message.content
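Run it like any ordinary function; the LLM request and response are captured under the create_joke trace:

if __name__ == "__main__":
    print(create_joke())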
TypeScript
Install the SDK
Install the package using your preferred package manager:
npm install @keywordsai/tracing
# or yarn
yarn add @keywordsai/tracing
Set up Environment Variables
Get your API key from the API Keys page in Settings, then configure it in your environment:
KEYWORDSAI_BASE_URL="https://api.keywordsai.co/api"
KEYWORDSAI_API_KEY="YOUR_KEYWORDSAI_API_KEY"
Create a simple task
import { KeywordsAITelemetry } from '@keywordsai/tracing';
import OpenAI from 'openai';

// Initialize clients
// Make sure to set these environment variables or pass them directly
const keywordsAi = new KeywordsAITelemetry({
    apiKey: process.env.KEYWORDSAI_API_KEY || "",
    appName: 'test-app',
    disableBatch: true  // For testing, disable batching
});
const openai = new OpenAI();

// This demonstrates a simple LLM call wrapped in a task
async function createJoke() {
    return await keywordsAi.withTask(
        { name: 'joke_creation' },
        async () => {
            const completion = await openai.chat.completions.create({
                messages: [{ role: 'user', content: 'Tell me a joke about TypeScript' }],
                model: 'gpt-4o-mini',
                temperature: 0.7
            });
            return completion.choices[0].message.content;
        }
    );
}
Create a workflow combining tasks
In this example, we create a workflow pirate_joke_workflow that combines the createJoke task with a translateJoke task (sketched after the workflow code below).
async function jokeWorkflow() {
    return await keywordsAi.withWorkflow(
        { name: 'pirate_joke_workflow' },
        async () => {
            const joke = await createJoke();
            const pirateJoke = await translateJoke(joke);
            return pirateJoke;
        }
    );
}
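The translateJoke task is not defined above. A minimal sketch, assuming it wraps a second LLM call in withTask exactly like createJoke (the task name and prompt here are illustrative):

// Illustrative sketch: translate the joke into pirate speak in a second task
async function translateJoke(joke: string | null) {
    return await keywordsAi.withTask(
        { name: 'joke_translation' },
        async () => {
            const completion = await openai.chat.completions.create({
                messages: [{ role: 'user', content: `Translate this joke into pirate speak: ${joke}` }],
                model: 'gpt-4o-mini',
                temperature: 0.7
            });
            return completion.choices[0].message.content;
        }
    );
}

Calling jokeWorkflow() then produces a single pirate_joke_workflow trace containing both task spans.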
You can now see your traces on the Traces page, and open Logs to view the details of your LLM calls.