POST /v1/traces/ingest

Example request (Python):
import requests

URL = "https://api.keywordsai.co/v1/traces/ingest"
# Set YOUR_KEYWORDS_AI_API_KEY to your Keywords AI API key before running.
headers = {
    "Authorization": f"Bearer {YOUR_KEYWORDS_AI_API_KEY}",
    "Content-Type": "application/json",
}

payload = [
    {
        "trace_unique_id": "a-trace-id",
        "span_unique_id": "root-span-id",
        "span_name": "pirate_joke_plus_audience_reactions.workflow",
        "span_parent_id": None,
        "timestamp": "2025-09-08T07:46:19.041835Z",
        "start_time": "2025-09-08T07:46:14.007279Z",
        "span_workflow_name": "pirate_joke_plus_audience_reactions",
        "span_path": "",
        "provider_id": "",
        "model": "python",
        "input": "{\"args\": [], \"kwargs\": {}}",
        "output": "\"python\"",
        "encoding_format": "float",
        "latency": 5.034556,
        "keywordsai_params": {"has_webhook": false, "environment": "prod"},
        "disable_log": false
    },
    {
        "trace_unique_id": "a-trace-id",
        "span_unique_id": "child-span-id",
        "span_name": "openai.chat",
        "span_parent_id": "root-span-id",
        "timestamp": "2025-09-08T07:46:14.617987Z",
        "start_time": "2025-09-08T07:46:14.007452Z",
        "span_workflow_name": "pirate_joke_generator",
        "span_path": "joke_creation",
        "provider_id": "openai",
        "model": "gpt-3.5-turbo",
        "input": "[{\"role\": \"assistant\", \"content\": \"Why did the opentelemetry developer go broke?\\n\\n\"}, {\"role\": \"user\", \"content\": \"Tell me a joke about opentelemetry\"}]",
        "output": "{\"role\": \"assistant\", \"content\": \"Why did the opentelemetry developer go broke?\\n\\n\"}",
        "prompt_messages": [
            {"role": "assistant", "content": "Why did the opentelemetry developer go broke?\\n\\n"},
            {"role": "user", "content": "Tell me a joke about opentelemetry"}
        ],
        "completion_message": {"role": "assistant", "content": "Why did the opentelemetry developer go broke?\\n\\n"},
        "encoding_format": "float",
        "prompt_tokens": 15,
        "completion_tokens": 10,
        "cost": 2.25e-05,
        "latency": 0.610535,
        "keywordsai_params": {"has_webhook": false, "environment": "prod"},
        "disable_log": false
    }
]

resp = requests.post(URL, json=payload, headers=headers)
print(resp.status_code)
print(resp.text)
Example response:

{
  "status": "ok",
  "ingested_spans": 2,
  "created_traces": 1,
  "trace_ids": ["a-trace-id"],
  "errors": []
}
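Beyond the HTTP status, the errors array in the response body (shown above) reports per-span problems, so both are worth checking. A minimal sketch that continues from the request example; the response fields are assumed to match the sample above:

resp.raise_for_status()  # surfaces transport/auth failures as exceptions

body = resp.json()
if body.get("errors"):
    for err in body["errors"]:
        print("span rejected:", err)  # inspect, fix, and re-send as needed
else:
    print(f"ingested {body['ingested_spans']} span(s) "
          f"across {body['created_traces']} trace(s)")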
Ingest a batch of span logs to construct one or more traces. Use this to import historical data or programmatically build traces when SDK instrumentation isn’t feasible.
If you’re starting fresh, we recommend using an SDK/integration (e.g., OpenAI Agents SDK) to capture traces automatically. This endpoint is best for bulk import and migration workflows.
Example project: GitHub link
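For bulk imports with many spans, one approach is to send the array in fixed-size batches of whole traces, so that each trace's spans arrive in a single request. A minimal sketch; BATCH_SIZE and the grouping step are illustrative choices, not requirements stated by the API:

import itertools
import requests

URL = "https://api.keywordsai.co/v1/traces/ingest"
BATCH_SIZE = 500  # illustrative; tune to your payload size limits

def ingest_spans(all_spans, headers):
    """POST spans in batches, keeping each trace's spans in one request."""
    # groupby requires sorted input; sort by trace ID first.
    all_spans = sorted(all_spans, key=lambda s: s["trace_unique_id"])
    batch = []
    for _, spans in itertools.groupby(all_spans, key=lambda s: s["trace_unique_id"]):
        spans = list(spans)
        # Flush the current batch if adding this trace would overflow it.
        if batch and len(batch) + len(spans) > BATCH_SIZE:
            requests.post(URL, json=batch, headers=headers).raise_for_status()
            batch = []
        batch.extend(spans)
    if batch:
        requests.post(URL, json=batch, headers=headers).raise_for_status()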

Body

body (array, required)
Array of span log objects. Each object corresponds to a span within a trace. Spans with the same trace_unique_id are grouped into a single trace, and parent-child relationships are inferred from span_parent_id. This schema matches the sample payload in the logs_to_trace example. A sketch of building these objects follows the field list.

trace_unique_id (string, required): Unique identifier for the trace. All spans with this value are grouped together.
span_unique_id (string, required): Unique identifier for the span.
span_parent_id (string): Parent span ID; omit or set to null for root spans.
span_name (string): Name of the span (e.g., "openai.chat", "workflow.start").
span_workflow_name (string): Name of the nearest parent workflow.
span_path (string): Nested path within the workflow (e.g., "joke_creation.store_joke").
start_time (string): RFC3339 UTC start timestamp.
timestamp (string): RFC3339 UTC end/event timestamp.
latency (number): Latency of the span operation, in seconds.
input (string): Raw or JSON-serialized input string used by the span.
output (string): Raw or JSON-serialized output string produced by the span.
model (string): Model name used by the span (e.g., "gpt-3.5-turbo", "gpt-4o-mini").
encoding_format (string): Embedding encoding format for spans that generate embeddings (e.g., "float").
provider_id (string): LLM or service provider ID (e.g., "openai").
prompt_tokens (integer): Number of prompt tokens used (if applicable).
completion_tokens (integer): Number of completion tokens used (if applicable).
cost (float): Cost associated with the span (if applicable).
metadata (object): Custom attributes as a key-value object.
warnings (string): Warnings or notes captured during span execution.
disable_log (boolean): Set to true to disable logging for the span in the observability system.
disable_fallback (boolean): Disable fallback behavior for the span, if supported.
keywordsai_params (object): Additional Keywords AI parameters (e.g., has_webhook, environment).
temperature (number): Controls randomness for LLM spans; typical range 0.0–1.0.
presence_penalty (number): Presence penalty parameter used in some LLM requests.
frequency_penalty (number): Frequency penalty parameter used in some LLM requests.
max_tokens (integer): Maximum tokens requested for completion/embedding generation.
stream (boolean): Whether streaming was enabled for the span.
prompt_messages (array): Messages sent to the LLM (each with role and content). Present for chat spans.
completion_message (object): Assistant message returned by the LLM (role/content). Present for chat spans.
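To make the grouping concrete, here is a minimal sketch that assembles one root span and one child span sharing a trace_unique_id. The uuid-based IDs and the field values are illustrative; per the field descriptions above, the API only requires that IDs be unique and that span_parent_id reference the parent's span_unique_id:

import uuid

trace_id = str(uuid.uuid4())            # shared by every span in this trace
root_id = str(uuid.uuid4())

root_span = {
    "trace_unique_id": trace_id,
    "span_unique_id": root_id,
    "span_parent_id": None,             # null parent marks a root span
    "span_name": "my_workflow.workflow",
    "start_time": "2025-09-08T07:46:14.007279Z",
    "timestamp": "2025-09-08T07:46:19.041835Z",
}

child_span = {
    "trace_unique_id": trace_id,        # same trace ID groups it with the root
    "span_unique_id": str(uuid.uuid4()),
    "span_parent_id": root_id,          # links this span under the root
    "span_name": "openai.chat",
    "start_time": "2025-09-08T07:46:14.007452Z",
    "timestamp": "2025-09-08T07:46:14.617987Z",
}

payload = [root_span, child_span]       # ready to POST to /v1/traces/ingest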
Prerequisites for successful ingestion:
  • Accurate timestamps for each span (start_time and/or timestamp) to preserve relative timing within a trace; see the sketch after this list.
  • Properly assigned trace and span IDs that reflect the correct parent-child relationships (trace_unique_id groups spans into a single trace; span_parent_id links children to their parents).
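One way to satisfy the timestamp prerequisite is to record both ends of each operation from the same clock and derive latency from those readings. A minimal sketch; do_work and format_ts are illustrative placeholders, not part of the API:

import time
from datetime import datetime, timezone

def format_ts(epoch_seconds):
    # Render a Unix timestamp as an RFC3339 UTC string with a trailing "Z".
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")

start = time.time()
result = do_work()                      # the operation this span represents
end = time.time()

span_timing = {
    "start_time": format_ts(start),
    "timestamp": format_ts(end),        # end/event time of the span
    "latency": end - start,             # seconds, consistent with the timestamps
}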