## Overview

The Keywords AI Tracing SDK can automatically instrument popular LLM libraries, capturing all API calls without manual tracing code.

## Supported Libraries
| Library | Package | Status |
|---|---|---|
| OpenAI | openai | ✅ Supported |
| Anthropic | @anthropic-ai/sdk | ✅ Supported |
## Setup
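Install the tracing SDK alongside the provider SDKs you want instrumented. The package name `@keywordsai/tracing` is an assumption based on this SDK's ecosystem; check the install instructions for the exact name.

```shell
npm install @keywordsai/tracing openai @anthropic-ai/sdk
```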
### OpenAI Instrumentation
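A minimal setup sketch. The `KeywordsAITelemetry` class name and the `openAI` key inside `instrumentModules` are assumptions based on identifiers mentioned on this page; check the SDK reference for exact signatures.

```typescript
import OpenAI from "openai";
// Package and class names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  // Pass the OpenAI class itself (not an instance) so it can be patched.
  instrumentModules: { openAI: OpenAI },
});
keywordsAi.initialize();

// Create the client only after initialize(), so its calls are captured.
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
```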
### Anthropic Instrumentation
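The same pattern for Anthropic, again as a sketch: the `anthropic` key name is an assumption, and only the class (never an instance) is passed.

```typescript
import Anthropic from "@anthropic-ai/sdk";
// Package and class names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  // Pass the Anthropic class, not an instance.
  instrumentModules: { anthropic: Anthropic },
});
keywordsAi.initialize();

const anthropic = new Anthropic();
const message = await anthropic.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 256,
  messages: [{ role: "user", content: "Hello!" }],
});
```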
### Multi-Provider Instrumentation
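Multiple libraries can be instrumented from a single initialization by listing each class in `instrumentModules`. A sketch under the same naming assumptions as above:

```typescript
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
// Package and class names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  // One entry per library class; calls from both clients are traced.
  instrumentModules: {
    openAI: OpenAI,
    anthropic: Anthropic,
  },
});
keywordsAi.initialize();

const openai = new OpenAI();
const anthropic = new Anthropic();
```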
## What Gets Traced

### OpenAI

Instrumented methods:

- Chat Completions: `openai.chat.completions.create()`
- Streaming: `openai.chat.completions.create({ stream: true })`
- Embeddings: `openai.embeddings.create()`
- Images: `openai.images.generate()`

Captured for each call:

- Model name
- Messages/prompts
- Response content
- Token usage
- Latency
- Errors
### Anthropic

Instrumented methods:

- Messages: `anthropic.messages.create()`
- Streaming: `anthropic.messages.create({ stream: true })`

Captured for each call:

- Model name
- Messages
- Response content
- Token usage
- Latency
- Errors
## Configuration Options
### Disable Specific Instrumentation
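One plausible pattern, assuming instrumentation is opt-in per module: include only the library classes you want traced and omit the rest.

```typescript
import OpenAI from "openai";
// Class and key names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  // Only OpenAI is listed; Anthropic calls would not be traced.
  instrumentModules: { openAI: OpenAI },
});
keywordsAi.initialize();
```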
### No Instrumentation
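A sketch of initializing without any auto-instrumentation (assumed behavior: with no `instrumentModules`, no libraries are patched and only manual spans such as `withWorkflow` and `withTask` are recorded):

```typescript
// Class name is an assumption, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  // No instrumentModules: manual tracing only.
});
keywordsAi.initialize();
```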
## Manual Tracing with Auto-Instrumentation

You can combine auto-instrumentation with manual tracing:
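A sketch of the combined pattern, assuming `withWorkflow` is exposed as a method on the telemetry instance (the class name and call signature are assumptions based on identifiers on this page): the manual span wraps the business logic, and the LLM call inside it is captured automatically.

```typescript
import OpenAI from "openai";
// Package and class names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  instrumentModules: { openAI: OpenAI },
});
keywordsAi.initialize();
const openai = new OpenAI();

async function answerQuestion(question: string) {
  // Manual span for the surrounding business logic...
  return keywordsAi.withWorkflow({ name: "answer_question" }, async () => {
    // ...while the OpenAI call inside is traced automatically,
    // nested under the workflow span.
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: question }],
    });
    return completion.choices[0].message.content;
  });
}
```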
## Streaming Support

Auto-instrumentation works with streaming:
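A streaming sketch (setup names assumed as above). Instrumentation of streaming calls typically records the response as the stream is consumed, so iterate the stream to completion:

```typescript
import OpenAI from "openai";
// Package and class names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  instrumentModules: { openAI: OpenAI },
});
keywordsAi.initialize();
const openai = new OpenAI();

const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Write a haiku." }],
  stream: true,
});

// Consume the stream fully so the full response can be recorded.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```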
## Error Tracking

Auto-instrumentation captures errors:
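A sketch of error capture (setup names assumed as above): the failed call is recorded on its span and the error is re-thrown, so your own error handling is unchanged.

```typescript
import OpenAI from "openai";
// Package and class names are assumptions, not verified API.
import { KeywordsAITelemetry } from "@keywordsai/tracing";

const keywordsAi = new KeywordsAITelemetry({
  apiKey: process.env.KEYWORDSAI_API_KEY,
  instrumentModules: { openAI: OpenAI },
});
keywordsAi.initialize();
const openai = new OpenAI();

try {
  await openai.chat.completions.create({
    // Invalid model name, used here only to trigger an API error.
    model: "this-model-does-not-exist",
    messages: [{ role: "user", content: "Hi" }],
  });
} catch (err) {
  // The error still appears in the trace; handle it as usual.
  console.error("LLM call failed:", err);
}
```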
## Best Practices

- Always pass the library class (not an instance) to `instrumentModules`
- Initialize auto-instrumentation before creating SDK instances
- Combine auto-instrumentation with manual tracing for complete visibility
- Auto-instrumentation captures all SDK calls within traced contexts
- Use manual tracing for business logic around LLM calls
- Auto-instrumentation has minimal performance overhead
## Troubleshooting

### Instrumentation Not Working

Ensure you:

- Pass the class to `instrumentModules` (e.g., `OpenAI`, not `openai`)
- Call `initialize()` before creating SDK instances
- Wrap calls in `withWorkflow`, `withTask`, `withAgent`, or `withTool`
- Use the latest version of the Keywords AI Tracing SDK