Overview
The `create()` and `acreate()` methods allow you to create a new log entry to track conversations, API requests, and their associated metadata. These methods support comprehensive logging of LLM interactions with detailed parameters for monitoring and analysis. Use `create()` for synchronous operations and `acreate()` for asynchronous operations.
Usage example
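No runnable example survives in this extract, so the following is a minimal sketch of assembling a log payload and submitting it. The client object and the exact key names (`prompt_messages`, `completion_message`, `generation_time`, `customer_identifier`) are assumptions mirroring the parameter descriptions below, not a confirmed SDK surface.

```python
# Sketch: assemble one log entry for a chat completion.
# All key names here are assumptions based on the parameter
# descriptions in this reference, not a verified SDK contract.
log_payload = {
    "prompt_messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "completion_message": {
        "role": "assistant",
        "content": "The capital of France is Paris.",
    },
    "prompt_tokens": 24,
    "completion_tokens": 8,
    "cost": 0.000064,          # US dollars
    "generation_time": 0.41,   # seconds
    "customer_identifier": "customer_123",
}

# Synchronous:   client.create(**log_payload)
# Asynchronous:  await client.acreate(**log_payload)
```

The same payload works for both methods; only the calling convention (blocking vs. awaited) differs.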
Parameters
Core Parameters
Array of prompt messages in chat format.

Message Structure:
- `role` (string, required): The role of the message (`system`, `developer`, `user`, `assistant`, `tool`)
- `content` (string, required): The content of the message
- `tool_call_id` (string, optional): The tool call ID for tool messages
Completion message in JSON format containing the model’s response.

Structure:
- `role` (string): Usually "assistant"
- `content` (string): The generated response content
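A small illustration of the two message shapes described above; the `validate_message` helper is purely illustrative, not part of the API:

```python
def validate_message(msg: dict) -> bool:
    """Check the chat-format fields described above."""
    allowed_roles = {"system", "developer", "user", "assistant", "tool"}
    if msg.get("role") not in allowed_roles:
        return False
    # content is required and must be a string
    return isinstance(msg.get("content"), str)

prompt_messages = [
    {"role": "user", "content": "Summarize this document."},
    # tool messages may carry the optional tool_call_id field
    {"role": "tool", "content": '{"pages": 3}', "tool_call_id": "call_abc"},
]
completion_message = {"role": "assistant", "content": "The document covers..."}
```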
Token and Cost Parameters
Number of tokens in the prompt.
Number of tokens in the completion.
Cost of the inference in US dollars.
Unit price per token for prompt tokens (for self-hosted/fine-tuned models).
Unit price per token for completion tokens (for self-hosted/fine-tuned models).
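For self-hosted or fine-tuned models, the cost field can be derived from the two unit prices. This sketch follows the natural reading of these fields (token count times per-token price, summed over both sides):

```python
def inference_cost(prompt_tokens: int, completion_tokens: int,
                   prompt_unit_price: float,
                   completion_unit_price: float) -> float:
    """Inference cost in US dollars from per-token unit prices."""
    return (prompt_tokens * prompt_unit_price
            + completion_tokens * completion_unit_price)

# 1,000 prompt tokens at $2e-6 each plus 200 completion tokens at $6e-6 each
cost = inference_cost(1000, 200, 2e-6, 6e-6)  # about $0.0032
```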
Performance Metrics
Total generation time in seconds. This is TTFT (time to first token) + TPOT (time per output token) × number of output tokens.
Time to first token in seconds.
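The relationship stated above (generation time = TTFT + TPOT × tokens) can be checked in a few lines:

```python
def generation_time(ttft: float, tpot: float, output_tokens: int) -> float:
    """Total latency: time to first token plus time per output token
    multiplied by the number of tokens generated."""
    return ttft + tpot * output_tokens

# 0.2 s to first token, 20 ms per subsequent token, 100 tokens: about 2.2 s
total = generation_time(0.2, 0.02, 100)
```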
Model Configuration Parameters
Controls randomness in the output (0-2 range).
Nucleus sampling parameter for token selection.
Penalty for token frequency to reduce repetition.
Penalty for token presence to encourage new topics.
Stop sequences for generation termination.
Whether the LLM inference was streamed.
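A sketch of how these settings might appear together in a log entry; the key names follow common chat-completion conventions and are assumptions here:

```python
model_config = {
    "temperature": 0.7,        # randomness, 0-2 range
    "top_p": 0.95,             # nucleus sampling cutoff
    "frequency_penalty": 0.2,  # penalize frequent tokens to reduce repetition
    "presence_penalty": 0.1,   # penalize present tokens to encourage new topics
    "stop": ["\n\n"],          # stop sequences terminating generation
    "stream": False,           # whether the inference was streamed
}
```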
Tool and Function Parameters
List of tools available to the model.

Tool Structure:
- `type` (string): Currently only "function" is supported
- `function` (object): Function definition with name, description, and parameters
Controls which tool is called by the model.

Structure:
- `type` (string): "function"
- `function` (object): Function specification with name
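The tool and tool-choice shapes described above, illustrated with a hypothetical `get_weather` function:

```python
tools = [
    {
        "type": "function",  # only "function" is currently supported
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Direct the model to call the function defined above.
tool_choice = {"type": "function", "function": {"name": "get_weather"}}
```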
Response Format
Response format specification for structured outputs.

Supported Types:
- `text`: Default text response
- `json_object`: JSON object response
- `json_schema`: Structured JSON with schema validation
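A sketch of a structured-output specification; the `json_schema` wrapper shape shown here follows the common chat-completion convention and is an assumption, not confirmed by this reference:

```python
response_format = {
    "type": "json_schema",  # alternatives: "text" (default), "json_object"
    "json_schema": {
        "name": "city_answer",
        "schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```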
Customer and Identification Parameters
Customer-related parameters for user tracking.

Structure:
- `customer_identifier` (string): Unique customer identifier
- `name` (string, optional): Customer name
- `email` (string, optional): Customer email
Custom identifier for fast querying (indexed field).
Group identifier for organizing related logs.
Unique identifier for conversation threads.
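A sketch combining the identification fields above; the variable names for the indexed, group, and thread identifiers are assumptions for illustration:

```python
customer_params = {
    "customer_identifier": "cus_42",  # required unique customer id
    "name": "Ada Lovelace",           # optional
    "email": "ada@example.com",       # optional
}

custom_identifier = "checkout-flow-v2"  # indexed for fast querying
group_identifier = "team-alpha"         # organizes related logs
thread_identifier = "thread_7f3a"       # one conversation thread
```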
Prompt Management
ID of the prompt used. Set `is_custom_prompt` to true for custom prompt IDs.
Name of the prompt used.
Whether the prompt is a custom prompt.
Status and Error Handling
HTTP status code of the LLM inference. Supports all valid HTTP status codes.
Error message if the LLM inference failed.
Any warnings that occurred during the LLM inference.
Metadata and Additional Data
Custom key-value pairs for additional context and filtering.
The complete request object including all configuration parameters.
Whether the user liked the output (true = liked).
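Metadata, the full request, and the feedback flag might be attached like this (key names assumed for illustration):

```python
metadata = {"experiment": "prompt-v3", "region": "eu-west-1"}  # filterable context

full_request = {  # the complete request, including configuration parameters
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hi"}],
    "temperature": 0.7,
}

positive_feedback = True  # the user liked the output
```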
Usage and Caching
Usage details including prompt caching information.

Structure:
- `prompt_tokens_details` (object): Contains a `cached_tokens` count
- `cache_creation_prompt_tokens` (integer): Cache creation tokens (Anthropic only)
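The caching fields above, sketched as a usage object:

```python
usage = {
    "prompt_tokens_details": {"cached_tokens": 1024},  # tokens served from cache
    "cache_creation_prompt_tokens": 2048,              # Anthropic only
}
cached = usage["prompt_tokens_details"]["cached_tokens"]
```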
API Control Parameters
Controls for Keywords AI API behavior.

Structure:
- `block` (boolean, default=true): If false, the call returns immediately with a status instead of waiting for the log to be recorded
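Non-blocking logging, sketched; the wrapper name `keywordsai_api_controls` is an assumption:

```python
# block=True (default): the call waits until the log is recorded.
# block=False: the call returns immediately with a status.
keywordsai_api_controls = {"block": False}

# client.create(..., keywordsai_api_controls=keywordsai_api_controls)
```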
Returns
Returns a dictionary containing the API response. Use the `list()` or `get()` methods to retrieve detailed log information.