Overview

The create() and acreate() methods create a new log entry that records a conversation or API request together with its associated metadata, with detailed parameters for monitoring and analysis of LLM interactions. Both methods accept the same parameters; use create() in synchronous code and acreate() in asynchronous code.

Usage example

from keywordsai.logs.api import LogAPI
import asyncio

# Create the client
log_api_client = LogAPI()

# Synchronous example
response = log_api_client.create({
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello, world!"}],
    "completion_message": {"role": "assistant", "content": "Hi there!"}
})
print(response["message"])  # "log successful"

# Asynchronous example
async def create_log_async():
    response = await log_api_client.acreate({
        "model": "gpt-4",
        "prompt_messages": [{"role": "user", "content": "Hello, world!"}],
        "completion_message": {"role": "assistant", "content": "Hi there!"}
    })
    print(response["message"])  # "log successful"

# Run the async function
asyncio.run(create_log_async())

Parameters

Core Parameters

model
string
Model used for the LLM inference. See the list of supported models here.
prompt_messages
array
Array of prompt messages in chat format. Message Structure:
  • role (string, required): The role of the message (system, developer, user, assistant, tool)
  • content (string, required): The content of the message
  • tool_call_id (string, optional): The tool call ID for tool messages
completion_message
dict
Completion message in JSON format containing the model’s response. Structure:
  • role (string): Usually “assistant”
  • content (string): The generated response content
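As a sketch, the two message shapes above can be built like this (the tool call ID and contents are illustrative):

```python
# Prompt messages in chat format; the tool message carries the
# tool_call_id that links it back to the assistant's tool call.
prompt_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "tool", "content": '{"temp_c": 18}', "tool_call_id": "call_abc123"},
]

# Completion message holding the model's final response.
completion_message = {"role": "assistant", "content": "It is 18°C in Paris."}
```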

Token and Cost Parameters

prompt_tokens
integer
Number of tokens in the prompt.
completion_tokens
integer
Number of tokens in the completion.
cost
float
default:0
Cost of the inference in US dollars.
prompt_unit_price
number
Unit price per token for prompt tokens (for self-hosted/fine-tuned models).
completion_unit_price
number
Unit price per token for completion tokens (for self-hosted/fine-tuned models).
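For self-hosted or fine-tuned models, the unit prices let cost be derived from the token counts. A minimal sketch with made-up per-token rates:

```python
# Hypothetical per-token prices in USD (not real rates).
prompt_unit_price = 0.00001
completion_unit_price = 0.00003

prompt_tokens = 15
completion_tokens = 25

# Cost of the inference in US dollars.
cost = prompt_tokens * prompt_unit_price + completion_tokens * completion_unit_price
```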

Performance Metrics

generation_time
float
default:0
Total generation time in seconds, computed as TTFT plus TPOT (time per output token) multiplied by the number of output tokens.
ttft
float
default:0
Time to first token in seconds.
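The relationship between ttft and generation_time can be sketched with an assumed per-output-token time (TPOT); all numbers here are illustrative:

```python
ttft = 0.25              # time to first token, seconds
tpot = 0.02              # time per output token, seconds (assumed)
completion_tokens = 25

# generation_time = TTFT + TPOT x number of output tokens
generation_time = ttft + tpot * completion_tokens
```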

Model Configuration Parameters

temperature
number
default:1
Controls randomness in the output (0-2 range).
top_p
number
default:1
Nucleus sampling parameter for token selection.
frequency_penalty
number
Penalty for token frequency to reduce repetition.
presence_penalty
number
Penalty for token presence to encourage new topics.
stop
array[string]
Stop sequences for generation termination.
stream
boolean
default:false
Whether the LLM inference was streamed.
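A sketch of how these sampling settings might appear in a log payload (all values illustrative):

```python
log_params = {
    "temperature": 0.7,        # 0-2; lower is more deterministic
    "top_p": 0.9,              # nucleus sampling cutoff
    "frequency_penalty": 0.5,  # discourage repeated tokens
    "presence_penalty": 0.3,   # encourage new topics
    "stop": ["\n\n", "END"],   # sequences that terminate generation
    "stream": True,            # the inference was streamed
}
```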

Tool and Function Parameters

tools
array
List of tools available to the model. Tool Structure:
  • type (string): Currently only “function” is supported
  • function (object): Function definition with name, description, and parameters
tool_choice
object
Controls which tool is called by the model. Structure:
  • type (string): “function”
  • function (object): Function specification with name
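A sketch of the tools and tool_choice structures described above; the get_weather function is a hypothetical example:

```python
tools = [
    {
        "type": "function",  # only "function" is supported
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Force the model to call get_weather.
tool_choice = {"type": "function", "function": {"name": "get_weather"}}
```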

Response Format

response_format
object
Response format specification for structured outputs. Supported Types:
  • text: Default text response
  • json_object: JSON object response
  • json_schema: Structured JSON with schema validation
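The response_format variants might look like this; the json_schema nesting below follows the common OpenAI-style convention and is an assumption, as is the schema name:

```python
# Plain JSON object output.
response_format_json = {"type": "json_object"}

# Structured output validated against a schema.
response_format_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "answer",  # hypothetical schema name
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
        },
    },
}
```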

Customer and Identification Parameters

customer_params
object
Customer-related parameters for user tracking. Structure:
  • customer_identifier (string): Unique customer identifier
  • name (string, optional): Customer name
  • email (string, optional): Customer email
custom_identifier
string
Custom identifier for fast querying (indexed field).
group_identifier
string
Group identifier for organizing related logs.
thread_identifier
string
Unique identifier for conversation threads.
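A sketch combining the identification fields above; every value is a made-up example:

```python
log_params = {
    "customer_params": {
        "customer_identifier": "user_12345",  # unique customer identifier
        "name": "John Doe",                   # optional
        "email": "john@example.com",          # optional
    },
    "custom_identifier": "onboarding_flow_v2",  # indexed for fast querying
    "group_identifier": "batch_2024_06",        # groups related logs
    "thread_identifier": "thread_456",          # ties logs to one conversation
}
```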

Prompt Management

prompt_id
string
ID of the prompt used. Set is_custom_prompt to true for custom prompt IDs.
prompt_name
string
Name of the prompt used.
is_custom_prompt
boolean
default:false
Whether the prompt is a custom prompt.
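When logging against a prompt, the three fields go together; the IDs and names below are hypothetical:

```python
log_params = {
    "prompt_id": "my_custom_prompt_01",  # hypothetical custom prompt ID
    "prompt_name": "support-triage",
    "is_custom_prompt": True,  # set to true because prompt_id is a custom ID
}
```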

Status and Error Handling

status_code
integer
default:200
HTTP status code of the LLM inference. Supports all valid HTTP status codes.
error_message
string
Error message if the LLM inference failed.
warnings
string
Any warnings that occurred during the LLM inference.
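A failed inference might be logged like this (the status code and messages are illustrative):

```python
failed_log = {
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": ""},
    "status_code": 429,  # any valid HTTP status code is accepted
    "error_message": "Rate limit exceeded",
    "warnings": "Retried twice before failing",
}
```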

Metadata and Additional Data

metadata
dict
Custom key-value pairs for additional context and filtering.
full_request
object
The complete request object including all configuration parameters.
positive_feedback
boolean
Whether the user liked the output (true = liked).
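metadata is free-form, so a sketch might attach arbitrary keys for later filtering alongside a feedback flag (keys and values illustrative):

```python
log_params = {
    # Custom key-value pairs for additional context and filtering.
    "metadata": {"session_id": "sess_42", "experiment": "prompt_v2"},
    "positive_feedback": True,  # the user liked this output
}
```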

Usage and Caching

usage
object
Usage details including prompt caching information. Structure:
  • prompt_tokens_details (object): Contains cached_tokens count
  • cache_creation_prompt_tokens (integer): Cache creation tokens (Anthropic only)
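The usage object with caching details might be shaped like this (the counts are illustrative; cache_creation_prompt_tokens applies to Anthropic models only):

```python
usage = {
    "prompt_tokens_details": {"cached_tokens": 1024},  # tokens served from cache
    "cache_creation_prompt_tokens": 2048,              # Anthropic only
}
```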

API Control Parameters

keywordsai_api_controls
object
Controls for Keywords AI API behavior. Structure:
  • block (boolean, default true): If false, the endpoint returns immediately with a status instead of waiting for the log to be written
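Fire-and-forget logging can be sketched by setting block to false, so the call returns immediately (the rest of the payload is illustrative):

```python
log_params = {
    "model": "gpt-4",
    "prompt_messages": [{"role": "user", "content": "Hello"}],
    "completion_message": {"role": "assistant", "content": "Hi!"},
    # With block=False the endpoint does not wait for the log to be persisted.
    "keywordsai_api_controls": {"block": False},
}
```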

Returns

Returns a dictionary containing the API response:
{
    "message": "log successful"
}
Note: The log create endpoint is designed for high throughput and returns only a success message, not the full log object. Use the list() or get() methods to retrieve detailed log information.

Examples

Basic Synchronous Example

import os
from dotenv import load_dotenv
from keywordsai.logs.api import LogAPI

load_dotenv()

def create_log_sync():
    """Basic synchronous log creation"""
    api_key = os.getenv("KEYWORDS_AI_API_KEY")
    log_api_client = LogAPI(api_key=api_key)
    
    log_params = {
        "model": "gpt-4",
        "prompt_messages": [
            {"role": "user", "content": "What are the benefits of renewable energy?"}
        ],
        "completion_message": {
            "role": "assistant", 
            "content": "Renewable energy offers environmental sustainability, cost savings, and energy independence."
        },
        "prompt_tokens": 15,
        "completion_tokens": 25,
        "cost": 0.0008,
        "generation_time": 0.8,
        "temperature": 0.7,
        "customer_params": {
            "customer_identifier": "user_12345",
            "name": "John Doe"
        },
        "custom_identifier": "renewable_energy_qa",
        "status_code": 200,
        "metadata": {"topic": "renewable_energy", "source": "web_chat"}
    }
    
    try:
        response = log_api_client.create(log_params)
        print(f"✓ Log created: {response['message']}")
        return response
    except Exception as e:
        print(f"✗ Error: {e}")
        return None

# Usage
create_log_sync()

Asynchronous Example

import asyncio
import os
from dotenv import load_dotenv
from keywordsai.logs.api import LogAPI

load_dotenv()

async def create_log_async():
    """Asynchronous log creation"""
    api_key = os.getenv("KEYWORDS_AI_API_KEY")
    log_api_client = LogAPI(api_key=api_key)
    
    log_params = {
        "model": "gpt-4",
        "prompt_messages": [
            {"role": "user", "content": "How does solar energy work?"}
        ],
        "completion_message": {
            "role": "assistant", 
            "content": "Solar energy works by converting sunlight into electricity using photovoltaic cells."
        },
        "prompt_tokens": 12,
        "completion_tokens": 18,
        "cost": 0.0006,
        "generation_time": 0.6,
        "temperature": 0.7,
        "customer_params": {
            "customer_identifier": "user_12345",
            "name": "John Doe"
        },
        "custom_identifier": "solar_energy_qa",
        "status_code": 200,
        "metadata": {"topic": "solar_energy", "source": "async_example"}
    }
    
    try:
        response = await log_api_client.acreate(log_params)
        print(f"✓ Async log created: {response['message']}")
        return response
    except Exception as e:
        print(f"✗ Async error: {e}")
        return None

# Usage
asyncio.run(create_log_async())

Convenience Functions

You can also use the convenience function to create a LogAPI client:
from keywordsai import create_log_client

client = create_log_client(api_key="your-api-key")
response = client.create(log_data)