Log your LLM requests and responses asynchronously
LLM observability is the process of monitoring, evaluating, and gaining insight into the performance and behavior of Large Language Models in real time. It is essential for understanding how your LLM application behaves in production.
Keywords AI provides an Async Logging API that lets you log your LLM requests and responses asynchronously, giving you complete observability of your LLM application without disrupting its performance.
After you create an account on Keywords AI, you can get your API key from the API keys page.
After you integrate the async logging into your codebase and send the request successfully, you can check your logs on the Logs page.
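The integration can be sketched as a fire-and-forget POST from a background thread, so logging never blocks your application's request path. The endpoint URL and payload field names below are assumptions for illustration; check the Logging endpoint page for the exact schema.

```python
# Minimal sketch of async logging to Keywords AI.
# LOGGING_URL and the payload field names are assumptions --
# consult the Logging endpoint docs for the real schema.
import json
import threading
import urllib.request

KEYWORDSAI_API_KEY = "YOUR_API_KEY"  # from the API keys page
LOGGING_URL = "https://api.keywordsai.co/api/request-logs/create/"  # assumed endpoint

def log_llm_call(payload: dict) -> None:
    """Send one request/response log to Keywords AI."""
    req = urllib.request.Request(
        LOGGING_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {KEYWORDSAI_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)

def log_llm_call_async(payload: dict) -> threading.Thread:
    """Log in a daemon thread so the main request path is never blocked."""
    t = threading.Thread(target=log_llm_call, args=(payload,), daemon=True)
    t.start()
    return t

payload = {
    "model": "gpt-4o-mini",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": "Hi there!"},
}
# log_llm_call_async(payload)  # fire-and-forget; a logging failure won't crash your app
```

Running the logging call on a daemon thread means a slow or failed log request adds no latency to the response you return to your users.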
Check out the Logging endpoint page to see all supported parameters.
Parameters such as `cost`, `completion_tokens`, and `prompt_tokens` are calculated automatically if your model is supported. Check out our models page to see the list of supported models.
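Because those fields are derived server-side for supported models, a log payload only needs the model name and the messages. The field names here are illustrative, matching the sketch above rather than a confirmed schema:

```python
# Hypothetical log payload: cost and token counts are intentionally omitted,
# since Keywords AI derives them from the model name for supported models.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "prompt_messages": [{"role": "user", "content": "Summarize this article."}],
    "completion_message": {"role": "assistant", "content": "Here is a summary..."},
}

# No need to compute these yourself for supported models:
assert "cost" not in payload
assert "completion_tokens" not in payload
assert "prompt_tokens" not in payload
```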