Overview
An LLM log is a record of an LLM request. It includes the prompt, the response, and the metadata associated with the request.
In Keywords AI, you can see every LLM log's metrics, including Messages, Model, Provider, User, API key, Prompt, Response, Cost, Duration, Status, and Timestamp.
Integrate with your existing AI framework
Keywords AI's logging is currently API-only, so you can use any AI framework or SDK of your choice (OpenAI, Anthropic, etc.) and simply call our logging API after receiving a response from your LLM provider.
How to use Keywords AI logging API
1. Get your Keywords AI API key
After you create an account on Keywords AI, you can get your API key from the API keys page.
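To avoid hardcoding the key in your source, you can load it from an environment variable. This is a small sketch; the variable name KEYWORDSAI_API_KEY is just a convention here, not something the API requires:

```python
import os

# Assumed variable name; set it in your shell first, e.g.:
#   export KEYWORDSAI_API_KEY="your-key-here"
KEYWORDSAI_API_KEY = os.environ.get("KEYWORDSAI_API_KEY", "")

# Reusable headers for every logging request
headers = {
    "Authorization": f"Bearer {KEYWORDSAI_API_KEY}",
    "Content-Type": "application/json",
}
```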
2. Integrate Async Logging into your codebase
import requests

url = "https://api.keywordsai.co/api/request-logs/create/"
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "prompt_messages": [
        {
            "role": "user",
            "content": "Hi"
        },
    ],
    "completion_message": {
        "role": "assistant",
        "content": "Hi, how can I assist you today?"
    },
    "cost": 0.00042,
    "generation_time": 5.7,
    "ttft": 3.1,
    "customer_params": {
        "customer_identifier": "customer_123",
        "name": "Hendrix Liu",
        "email": "hendrix@keywordsai.co"
    }
}
headers = {
    "Authorization": "Bearer YOUR_KEYWORDS_AI_API_KEY",
    "Content-Type": "application/json"
}
response = requests.post(url, headers=headers, json=payload)
After you integrate async logging into your codebase and the request succeeds, you can check your logs on the Logs page.
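Because logging is a fire-after-the-fact call, it helps to check whether the record was actually accepted. A minimal sketch of wrapping the call so failures are visible, assuming the endpoint returns a 2xx status on success:

```python
import requests

KEYWORDS_AI_URL = "https://api.keywordsai.co/api/request-logs/create/"

def send_log(payload, api_key):
    """POST a single log record; return True if it was accepted (2xx)."""
    response = requests.post(
        KEYWORDS_AI_URL,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=10,  # don't let a slow logging call hang your app
    )
    return response.ok
```

The timeout keeps a slow or unreachable logging endpoint from blocking the rest of your application.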
3. Parameters
Check out the Logging endpoint page to see all supported parameters.
Parameters like cost, completion_tokens, and prompt_tokens will be calculated automatically if your model is supported. Check out our models page to see the list of supported models.
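If your model is not on the supported list, you can supply these fields yourself in the payload. The model name and numbers below are hypothetical, purely for illustration:

```python
payload = {
    "model": "my-custom-model",  # hypothetical model not on the supported list
    "prompt_messages": [{"role": "user", "content": "Hi"}],
    "completion_message": {"role": "assistant", "content": "Hello!"},
    # Supplied manually because auto-calculation only covers supported models:
    "prompt_tokens": 1,
    "completion_tokens": 2,
    "cost": 0.00001,
}
```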
Example with OpenAI SDK
The following example demonstrates how to integrate the Keywords AI Logging API with the OpenAI SDK.
1. Basic OpenAI Implementation
This is a typical OpenAI SDK implementation:
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant."
      },
      {
        role: "user",
        content: "Hello!"
      }
    ],
    model: "gpt-4o",
  });
  console.log(completion.choices[0]);
}

main();
2. Add logging to the implementation
Here’s the same implementation with Keywords AI logging added:
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const startTime = Date.now();
  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant."
      },
      {
        role: "user",
        content: "Hello!"
      }
    ],
    model: "gpt-4o"
  });
  const endTime = Date.now();
  const generationTime = (endTime - startTime) / 1000; // Convert to seconds

  // Log to Keywords AI
  await fetch("https://api.keywordsai.co/api/request-logs/create/", {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_KEYWORDS_AI_API_KEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "gpt-4o",
      prompt_messages: [
        {
          role: "system",
          content: "You are a helpful assistant."
        },
        {
          role: "user",
          content: "Hello!"
        }
      ],
      completion_message: completion.choices[0].message,
      generation_time: generationTime
      // other Keywords AI parameters
    })
  });

  console.log(completion.choices[0]);
}

main();
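Since logging happens after your user already has their answer, the logging call doesn't need to sit on the critical path. One simple pattern, sketched here in Python, is to run it on a background thread; send_fn stands in for whatever function you've wrapped the logging POST in:

```python
import threading

def log_in_background(send_fn, payload):
    """Run the logging call on a daemon thread so it never delays the
    user-facing response; returns the thread so callers may join it."""
    thread = threading.Thread(target=send_fn, args=(payload,), daemon=True)
    thread.start()
    return thread
```

The daemon flag means an in-flight logging call won't keep the process alive at shutdown; if you need delivery guarantees, join the returned thread or use a queue with a dedicated worker instead.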