AI gateway

Rate limits
