This guide helps you choose the right tools and integration path based on how you use language models and agents. 
Logging, gateway, or tracing? 
LLM logging 
You call the LLM yourself, and we monitor those calls. 
If you already have LLM API calls in your code and just want observability (prompts, responses, latency, user tracking, etc.), use the Logging API. 
No added latency 
Works with any LLM provider 
Fast setup with a single HTTP call 
 
👉 Quickstart  
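Because logging happens after your own LLM call returns, it reduces to one extra HTTP request. A minimal sketch of the pattern (the endpoint URL and payload field names below are placeholders; check the Logging API quickstart for the real ones):

```python
import time

# Hypothetical logging endpoint -- replace with the real URL from the quickstart.
LOG_ENDPOINT = "https://api.example.com/v1/log"

def build_log_payload(prompt, response, model, latency_ms, user_id=None):
    """Assemble one log record for a completed LLM call."""
    return {
        "model": model,
        "prompt": prompt,
        "response": response,
        "latency_ms": latency_ms,
        "user_id": user_id,
        "timestamp": time.time(),
    }

def log_llm_call(api_key, payload):
    """Ship the record with a single HTTP call; nothing sits between you and the provider."""
    import requests  # third-party; pip install requests
    requests.post(
        LOG_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=5,
    )

# You make the LLM call yourself, time it, then log it:
start = time.time()
# response_text = call_your_llm_provider(...)   # your existing code
response_text = "Hello!"  # stand-in for the provider's reply
payload = build_log_payload(
    prompt="Say hello",
    response=response_text,
    model="gpt-4o",
    latency_ms=int((time.time() - start) * 1000),
    user_id="user_123",
)
# log_llm_call("YOUR_API_KEY", payload)  # fire-and-forget after the call returns
```

Since the log request runs after the provider's response is already in hand, your users never wait on it.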
LLM gateway 
We make the LLM calls for you and handle everything around them. 
 
The Gateway is the easiest way to connect to major LLM providers. It handles routing, retries, load balancing, caching, fallbacks, and more. 
50–150 ms latency 
Supports 250+ models 
Built-in cost tracking, retries, observability 
 
👉 Quickstart  
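In practice, using a gateway usually means sending a standard chat request to the gateway's endpoint with routing options attached. A sketch under assumed names (the URL, header, and option names here are illustrative, not the real API; see the quickstart for the actual shape):

```python
# Hypothetical gateway endpoint -- replace with the real one from the quickstart.
GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"

def build_gateway_request(model, messages, fallbacks=(), cache=True, max_retries=2):
    """An OpenAI-style chat payload plus gateway-level routing options."""
    return {
        "model": model,
        "messages": list(messages),
        # Gateway-specific extras (field names illustrative):
        "gateway": {
            "fallbacks": list(fallbacks),   # models to try if the primary fails
            "cache": cache,                 # serve repeated prompts from cache
            "max_retries": max_retries,     # retry transient provider errors
        },
    }

request = build_gateway_request(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hi"}],
    fallbacks=["claude-3-5-sonnet", "gemini-1.5-pro"],
)
# requests.post(GATEWAY_URL, json=request,
#               headers={"Authorization": "Bearer YOUR_API_KEY"})
```

The point of the design: retries, fallbacks, and caching live in one request config instead of being re-implemented around every provider SDK in your codebase.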
Agent tracing 
Track your agent workflows step by step 
 
Perfect for multi-step chains and workflows built with popular AI agent frameworks. 
Native support for OpenAI Agent SDK, Mastra, and more 
Custom step tracing with our SDK 
Visualize execution trees, tools, retries, state changes 
 
👉 Quickstart  
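To make "custom step tracing" concrete, here is a minimal sketch of the idea: wrap each stage of an agent in a context manager that records its name, metadata, and duration, producing one record per step. The names (`trace_step`, the `steps` list) are illustrative only; the real SDK's API will differ.

```python
import time
from contextlib import contextmanager

# Collected step records; the real SDK would ship these to a tracing backend.
steps = []

@contextmanager
def trace_step(name, **metadata):
    """Record one step of an agent workflow: name, metadata, duration."""
    start = time.time()
    try:
        yield
    finally:
        steps.append({
            "name": name,
            "metadata": metadata,
            "duration_ms": int((time.time() - start) * 1000),
        })

# Each stage of a multi-step agent becomes one traced step:
with trace_step("plan", model="gpt-4o"):
    plan = "search, then summarize"
with trace_step("tool_call", tool="web_search"):
    results = ["doc1", "doc2"]
with trace_step("summarize", model="gpt-4o"):
    answer = f"Summary of {len(results)} documents"
```

After the run, `steps` holds one record per stage; nesting the context managers is what lets a backend reconstruct the execution tree you see in the UI.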
Prompt management and evaluations 
Isolate prompts from code, collaborate with prompt engineers, and iterate faster. 
 
Version control for prompts 
Human + LLM evaluation pipelines 
Visual scoreboards, test sets, experiments 
 
👉 Quickstart  
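The core pattern behind isolating prompts from code can be sketched in a few lines: code references a prompt by name (optionally pinning a version), while the template text lives elsewhere and is iterated on independently. Everything below (`get_prompt`, the in-memory store) is a hypothetical illustration, not the product's API.

```python
# Stand-in for a remote prompt store, keyed by (name, version).
PROMPTS = {
    ("summarize", 1): "Summarize this: {text}",
    ("summarize", 2): "Summarize the following in two sentences:\n{text}",
}

def get_prompt(name, version=None):
    """Fetch a prompt template by name; latest version if none is pinned."""
    versions = [v for (n, v) in PROMPTS if n == name]
    if not versions:
        raise KeyError(name)
    return PROMPTS[(name, version or max(versions))]

# Code references the prompt by name; prompt engineers iterate on the text.
template = get_prompt("summarize")  # resolves to the latest version
prompt = template.format(text="LLM gateways route traffic across providers.")
```

Because the template is resolved at call time, a prompt engineer can ship version 3 without a code deploy, and an evaluation pipeline can run the same test set against versions 1 and 2 side by side.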
Need help? Join our Discord — we’ll help you as best we can.