Overview
The `create()` and `acreate()` methods create a new log entry to track conversations, API requests, and their associated metadata. They support comprehensive logging of LLM interactions, with detailed parameters for monitoring and analysis. Use `create()` for synchronous operations and `acreate()` for asynchronous operations.
Usage example
```python
from keywordsai.logs.api import LogAPI
import asyncio

# Create the client
log_api_client = LogAPI()

# Synchronous example
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello, world!"}],
    "completion_message": {"role": "assistant", "content": "Hi there!"}
})
print(response["message"])  # "log successful"

# Asynchronous example
async def create_log_async():
    response = await log_api_client.acreate({
        "model": "gpt-4",
        "prompt_messages": [{"role": "user", "content": "Hello, world!"}],
        "completion_message": {"role": "assistant", "content": "Hi there!"}
    })
    print(response["message"])  # "log successful"

# Run the async function
asyncio.run(create_log_async())
```
Parameters
Core Parameters
- `model`: Model used for the LLM inference. See the list of supported models here.
- `prompt_messages`: Array of prompt messages in chat format. Message structure:
  - `role` (string, required): The role of the message (`system`, `developer`, `user`, `assistant`, `tool`)
  - `content` (string, required): The content of the message
  - `tool_call_id` (string, optional): The tool call ID for tool messages
- `completion_message`: Completion message in JSON format containing the model's response. Structure:
  - `role` (string): Usually "assistant"
  - `content` (string): The generated response content
Token and Cost Parameters
- `prompt_tokens`: Number of tokens in the prompt.
- `completion_tokens`: Number of tokens in the completion.
- `cost`: Cost of the inference in US dollars.
- Unit price per prompt token (for self-hosted/fine-tuned models).
- Unit price per completion token (for self-hosted/fine-tuned models).
- `generation_time`: Total generation time in seconds, i.e. TTFT (time to first token) plus TPOT (time per output token) multiplied by the number of output tokens.
- Time to first token (TTFT), in seconds.
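For example, a response with a 0.2-second time to first token and 25 output tokens generated at 0.02 seconds per token yields a total generation time of 0.2 + 0.02 × 25 = 0.7 seconds. A quick sketch of that arithmetic (values are illustrative only):

```python
# Illustrative generation_time arithmetic: TTFT + TPOT * number of output tokens.
ttft = 0.2               # time to first token, seconds
tpot = 0.02              # time per output token, seconds
completion_tokens = 25   # number of output tokens

generation_time = ttft + tpot * completion_tokens
print(generation_time)   # 0.7
```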
Model Configuration Parameters
- `temperature`: Controls randomness in the output (0-2 range).
- `top_p`: Nucleus sampling parameter for token selection.
- `frequency_penalty`: Penalty on token frequency to reduce repetition.
- `presence_penalty`: Penalty on token presence to encourage new topics.
- `stop`: Stop sequences for generation termination.
- `stream`: Whether the LLM inference was streamed.
- `tools`: List of tools available to the model. Tool structure:
  - `type` (string): Currently only "function" is supported
  - `function` (object): Function definition with name, description, and parameters
- `tool_choice`: Controls which tool is called by the model. Structure:
  - `type` (string): "function"
  - `function` (object): Function specification with name
- `response_format`: Response format specification for structured outputs. Supported types:
  - `text`: Default text response
  - `json_object`: JSON object response
  - `json_schema`: Structured JSON with schema validation
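To see how the tool fields fit together, here is a rough sketch of a payload logging a streamed request that exposed a single hypothetical `get_weather` tool to the model. The tool schema follows the function structure described above, and all values are illustrative:

```python
# Sketch: logging a streamed tool-call interaction (illustrative values).
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    # The model answered with a tool call, so the text content is empty.
    "completion_message": {"role": "assistant", "content": ""},
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
    "stream": True,
})
```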
Customer and Identification Parameters
- `customer_params`: Customer-related parameters for user tracking. Structure:
  - `customer_identifier` (string): Unique customer identifier
  - `name` (string, optional): Customer name
  - `email` (string, optional): Customer email
- `custom_identifier`: Custom identifier for fast querying (indexed field).
- Group identifier for organizing related logs.
- Unique identifier for conversation threads.
Prompt Management
- ID of the prompt used. Set `is_custom_prompt` to `true` for custom prompt IDs.
- `is_custom_prompt`: Whether the prompt is a custom prompt.
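As a sketch, linking a log to a custom prompt might look like the following. Note that the field name `prompt_id` is an assumption here, since this page only names `is_custom_prompt`:

```python
# Sketch: tagging a log with a custom prompt.
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Summarize this article."}],
    "completion_message": {"role": "assistant", "content": "Here is a summary..."},
    "prompt_id": "my_custom_prompt_001",  # assumed field name for the prompt ID
    "is_custom_prompt": True,
})
```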
Status and Error Handling
- `status_code`: HTTP status code of the LLM inference. Supports all valid HTTP status codes.
- Error message if the LLM inference failed.
- Any warnings that occurred during the LLM inference.
- `metadata`: Custom key-value pairs for additional context and filtering.
- The complete request object, including all configuration parameters.
- Whether the user liked the output (`true` = liked).
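For failed inferences, a log payload might look like the sketch below. `status_code` and `metadata` appear in this page's examples, while the error field name `error_message` is an assumption:

```python
# Sketch: logging a failed LLM inference.
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": ""},
    "status_code": 429,                      # the upstream call was rate limited
    "error_message": "Rate limit exceeded",  # assumed field name
    "metadata": {"source": "batch_job"},
})
```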
Usage and Caching
- Usage details, including prompt caching information. Structure:
  - `prompt_tokens_details` (object): Contains the `cached_tokens` count
  - `cache_creation_prompt_tokens` (integer): Cache creation tokens (Anthropic only)
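For example, a log that records prompt-cache activity might attach a usage object shaped like the one below. This is a sketch: the top-level field name `usage` is assumed from the section title, and the model ID is illustrative:

```python
# Sketch: logging a cached prompt hit (the "usage" field name is assumed).
response = log_api_client.create({
    "model": "claude-3-5-sonnet-20241022",  # Anthropic model, since cache creation is Anthropic-only
    "prompt_messages": [{"role": "user", "content": "Hello again"}],
    "completion_message": {"role": "assistant", "content": "Welcome back!"},
    "usage": {
        "prompt_tokens_details": {"cached_tokens": 1024},  # tokens served from the prompt cache
        "cache_creation_prompt_tokens": 2048,              # cache creation tokens (Anthropic only)
    },
})
```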
API Control Parameters
- Controls for Keywords AI API behavior. Structure:
  - `block` (boolean, default `true`): If `false`, returns immediately with a status
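A sketch of non-blocking, fire-and-forget logging follows. The wrapper field name `keywordsai_api_controls` is an assumption, since this page only names the inner `block` flag:

```python
# Sketch: return immediately instead of waiting for the log write to complete.
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": "Hi!"},
    "keywordsai_api_controls": {"block": False},  # assumed wrapper field name
})
```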
Returns
Returns a dictionary containing the API response:

```python
{
    "message": "log successful"
}
```

Note: The log create endpoint is designed for high throughput and returns only a success message, not the full log object. Use the `list()` or `get()` methods to retrieve detailed log information.
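Since only the acknowledgement message comes back, a simple success check looks like this (payload fields are taken from the examples below):

```python
# Check the acknowledgement returned by create().
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": "Hi!"},
})
if response.get("message") == "log successful":
    print("Log accepted")
```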
Examples
Basic Synchronous Example
```python
import os
from dotenv import load_dotenv
from keywordsai.logs.api import LogAPI

load_dotenv()

def create_log_sync():
    """Basic synchronous log creation"""
    api_key = os.getenv("KEYWORDS_AI_API_KEY")
    log_api_client = LogAPI(api_key=api_key)

    log_params = {
        "model": "gpt-4",
        "prompt_messages": [
            {"role": "user", "content": "What are the benefits of renewable energy?"}
        ],
        "completion_message": {
            "role": "assistant",
            "content": "Renewable energy offers environmental sustainability, cost savings, and energy independence."
        },
        "prompt_tokens": 15,
        "completion_tokens": 25,
        "cost": 0.0008,
        "generation_time": 0.8,
        "temperature": 0.7,
        "customer_params": {
            "customer_identifier": "user_12345",
            "name": "John Doe"
        },
        "custom_identifier": "renewable_energy_qa",
        "status_code": 200,
        "metadata": {"topic": "renewable_energy", "source": "web_chat"}
    }

    try:
        response = log_api_client.create(log_params)
        print(f"✓ Log created: {response['message']}")
        return response
    except Exception as e:
        print(f"✗ Error: {e}")
        return None

# Usage
create_log_sync()
```
Asynchronous Example
```python
import asyncio
import os
from dotenv import load_dotenv
from keywordsai.logs.api import LogAPI

load_dotenv()

async def create_log_async():
    """Asynchronous log creation"""
    api_key = os.getenv("KEYWORDS_AI_API_KEY")
    log_api_client = LogAPI(api_key=api_key)

    log_params = {
        "model": "gpt-4",
        "prompt_messages": [
            {"role": "user", "content": "How does solar energy work?"}
        ],
        "completion_message": {
            "role": "assistant",
            "content": "Solar energy works by converting sunlight into electricity using photovoltaic cells."
        },
        "prompt_tokens": 12,
        "completion_tokens": 18,
        "cost": 0.0006,
        "generation_time": 0.6,
        "temperature": 0.7,
        "customer_params": {
            "customer_identifier": "user_12345",
            "name": "John Doe"
        },
        "custom_identifier": "solar_energy_qa",
        "status_code": 200,
        "metadata": {"topic": "solar_energy", "source": "async_example"}
    }

    try:
        response = await log_api_client.acreate(log_params)
        print(f"✓ Async log created: {response['message']}")
        return response
    except Exception as e:
        print(f"✗ Async error: {e}")
        return None

# Usage
asyncio.run(create_log_async())
```
Convenience Functions
You can also use the `create_log_client` convenience function to create a `LogAPI` client:

```python
from keywordsai import create_log_client

client = create_log_client(api_key="your-api-key")
response = client.create(log_data)
```