Guide
Logging
LLM inference
A single API endpoint to access 200+ LLMs and log every request.
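A minimal sketch of a direct call to the gateway, assuming an OpenAI-style payload and the base URL shown below (confirm both in the API reference):

```typescript
// Call the unified endpoint over HTTP; the request is logged automatically.
// The base URL here is an assumption; confirm it in the API reference.
const response = await fetch("https://api.keywordsai.co/api/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.KEYWORDSAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // any of the 200+ supported models
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```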
Async logging
Log your LLM requests asynchronously, without adding latency to your application.
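A rough sketch of the pattern, assuming a request-logs endpoint and field names like the ones below (check the API reference for the exact schema): the completed call is posted to Keywords AI outside the hot path, so users never wait on logging.

```typescript
// Send an already-completed LLM call to Keywords AI after responding to the
// user, so logging adds no latency to the request path. The endpoint path and
// field names are assumptions; check the API reference for the exact schema.
async function logRequest(): Promise<void> {
  await fetch("https://api.keywordsai.co/api/request-logs/create/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.KEYWORDSAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      prompt_messages: [{ role: "user", content: "Hello!" }],
      completion_message: { role: "assistant", content: "Hi! How can I help?" },
      latency: 1.2, // seconds, measured in your application
    }),
  });
}

// Fire and forget: kick off the log write without awaiting it.
void logRequest();
```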
OpenAI SDK logging
Switch from the OpenAI SDK to Keywords AI with 2 lines of code.
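In practice the two lines are the client's baseURL and apiKey; the gateway URL below is an assumption, so verify it in the integration guide.

```typescript
import OpenAI from "openai";

// Only baseURL and apiKey change from a stock OpenAI setup.
// The gateway URL is an assumption; verify it in the integration guide.
const client = new OpenAI({
  baseURL: "https://api.keywordsai.co/api/", // was https://api.openai.com/v1
  apiKey: process.env.KEYWORDSAI_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```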
Anthropic logging
Switch from the Anthropic SDK to Keywords AI with 2 lines of code.
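Same pattern as the OpenAI integration: override the SDK's baseURL and apiKey. The Anthropic-compatible route shown below is an assumption; verify it in the integration guide.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Only baseURL and apiKey change from a stock Anthropic setup.
// The Anthropic-compatible route is an assumption; verify it in the guide.
const anthropic = new Anthropic({
  baseURL: "https://api.keywordsai.co/api/anthropic/",
  apiKey: process.env.KEYWORDSAI_API_KEY,
});

const message = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(message.content);
```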
LangChain logging
Switch from LangChain to Keywords AI with 2 lines of code.
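With LangChain the two lines are the API key and the base URL passed through to the underlying OpenAI client; the gateway URL is an assumption, so verify it in the integration guide.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Route LangChain's OpenAI client through the Keywords AI gateway.
// The gateway URL is an assumption; verify it in the integration guide.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.KEYWORDSAI_API_KEY,
  configuration: { baseURL: "https://api.keywordsai.co/api/" },
});

const result = await model.invoke("Hello!");
console.log(result.content);
```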
Vercel AI SDK logging
Switch from the Vercel AI SDK to Keywords AI with 2 lines of code.
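With the Vercel AI SDK, an OpenAI-compatible provider can be pointed at the gateway; the URL below is an assumption, so verify it in the integration guide.

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// Create an OpenAI-compatible provider that routes through Keywords AI.
// The gateway URL is an assumption; verify it in the integration guide.
const keywordsai = createOpenAI({
  baseURL: "https://api.keywordsai.co/api/",
  apiKey: process.env.KEYWORDSAI_API_KEY,
});

const { text } = await generateText({
  model: keywordsai("gpt-4o-mini"),
  prompt: "Hello!",
});
console.log(text);
```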
Unified LLM API
Integrate 200+ LLMs
Connect to over 200 best-in-class models through a single, unified API.
Enable fallback models
Improve uptime by automatically routing requests to fallback models when a provider fails.
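A sketch of what this might look like at the request level, with backup models tried in order when the primary provider fails; the fallback_models field name is an assumption, so check the API reference for the exact parameter.

```typescript
// Hypothetical request-level fallback configuration: if the primary model's
// provider errors out, the gateway retries with the listed backups in order.
// The "fallback_models" field name is an assumption; check the API reference.
const response = await fetch("https://api.keywordsai.co/api/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.KEYWORDSAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    fallback_models: ["claude-3-5-haiku-20241022", "gemini-1.5-flash"],
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
```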
Load balancing
Distribute requests across multiple deployments to raise effective rate limits and improve reliability.
Custom metadata
Track and annotate your requests with custom metadata.
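For example, arbitrary key-value pairs can be attached to a request and used later for filtering and analysis; the metadata field name is an assumption, so check the API reference for the exact parameter.

```typescript
// Attach custom key-value metadata to a request so it can be filtered and
// analyzed later. The "metadata" field name is an assumption; check the
// API reference for the exact parameter.
const response = await fetch("https://api.keywordsai.co/api/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.KEYWORDSAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }],
    metadata: { feature: "onboarding-chat", environment: "production" },
  }),
});
```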
API keys management
Manage all API keys in one place, with access control and user permissions.
LLM monitoring
Usage dashboard
A graphical view for monitoring your usage, performance, and cost.
Logs
Debug, test, and improve LLM outputs with detailed metrics.
User analytics
Insights into your users’ behavior and usage.
Alerts
Subscribe to the system status and get notified when an outage occurs.
Webhooks
Get notified when a request is completed.
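A minimal sketch of a receiver for those notifications; the payload shape and any signature verification are assumptions here, so see the webhooks guide for the exact event schema.

```typescript
import { createServer } from "node:http";

// Minimal webhook receiver: Keywords AI POSTs an event when a request
// completes. The payload shape is an assumption; see the webhooks guide
// for the exact schema and how to verify signatures.
createServer((req, res) => {
  if (req.method === "POST" && req.url === "/keywordsai-webhook") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const event = JSON.parse(body);
      console.log("Request completed:", event);
      res.writeHead(200).end("ok");
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(3000);
```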
Prompt testing
LLM playground
Test the API with your prompts on different models and compare the results.
Prompt management
Iterate on and version prompts as a team.
Evaluation & improvement
Evaluations
Evaluate your LLM outputs with built-in or custom metrics.
Datasets
Create high-quality golden datasets for model fine-tuning.