If you’re starting fresh, we recommend using an SDK/integration (e.g., OpenAI Agents SDK) to capture traces automatically. This endpoint is best for bulk import and migration workflows.
Example project: GitHub Link
Body
Array of span log objects. Each object corresponds to a span within a trace. Spans with the same trace_unique_id are grouped into a single trace; parent-child relationships are inferred via span_parent_id. Aligns with the sample payload in the logs_to_trace example. Each span log object accepts the following fields:
- Unique identifier for the trace. All spans with this value are grouped together.
- Unique identifier for the span.
- Parent span ID; omit or set to null for root spans.
- Name of the span (e.g., “openai.chat”, “workflow.start”).
- Nearest parent workflow name.
- Nested path within the workflow (e.g., “joke_creation.store_joke”).
- RFC3339 UTC start timestamp.
- RFC3339 UTC end/event timestamp.
- Latency in seconds for the span operation.
- Raw input string or JSON-serialized string used by the span.
- Raw output string or JSON-serialized string produced by the span.
- Model name used by the span (e.g., “gpt-3.5-turbo”, “gpt-4o-mini”).
- Embedding encoding format for spans that generate embeddings (e.g., “float”).
- LLM or service provider ID (e.g., “openai”).
- Number of prompt tokens used (if applicable).
- Number of completion tokens used (if applicable).
- Cost associated with the span (if applicable).
- Custom attributes as a key-value object.
- Warnings or notes captured during span execution.
- Set to true to disable logging for the span in the observability system.
- Disables fallback behavior for the span, if supported.
- Additional Keywords AI parameters (e.g., has_webhook, environment).
- Controls randomness for LLM spans; typical range 0.0–1.0.
- Presence penalty parameter used in some LLM requests.
- Frequency penalty parameter used in some LLM requests.
- Maximum number of tokens requested for completion/embedding generation.
- Whether streaming was enabled for the span.
- Array of messages sent to the LLM (each with role and content). Present for chat spans.
- Assistant message returned by the LLM (role/content). Present for chat spans.
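
To make the grouping concrete, here is a minimal sketch of the request body in Python: a root workflow span plus one child LLM span sharing a trace_unique_id. Only trace_unique_id, span_parent_id, start_time, and timestamp are named on this page; every other key below (span_unique_id, span_name, latency, model, token counts, message fields) is an assumed name inferred from the field descriptions above, so confirm the exact schema against the logs_to_trace example project.

```python
# Hypothetical sketch of the request body: two spans that form one trace.
# Only trace_unique_id, span_parent_id, start_time, and timestamp are named on
# this page; the remaining keys are assumptions based on the field descriptions.
spans = [
    {
        "trace_unique_id": "trace_8f2a",        # groups all spans into one trace
        "span_unique_id": "span_root",          # assumed key for the span identifier
        "span_parent_id": None,                 # null/omitted => root span
        "span_name": "workflow.start",
        "start_time": "2024-05-01T12:00:00Z",   # RFC3339 UTC start timestamp
        "timestamp": "2024-05-01T12:00:02Z",    # RFC3339 UTC end/event timestamp
        "latency": 2.0,                         # seconds
    },
    {
        "trace_unique_id": "trace_8f2a",
        "span_unique_id": "span_llm_1",
        "span_parent_id": "span_root",          # nests this span under the root
        "span_name": "openai.chat",
        "model": "gpt-4o-mini",
        "start_time": "2024-05-01T12:00:00Z",
        "timestamp": "2024-05-01T12:00:01Z",
        "latency": 1.0,
        "prompt_tokens": 42,
        "completion_tokens": 18,
        "prompt_messages": [{"role": "user", "content": "Tell me a joke."}],
        "completion_message": {"role": "assistant", "content": "..."},
    },
]
```

Posting this array as the request body should produce a single trace with “workflow.start” as the root span and “openai.chat” nested beneath it.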
Prerequisites for successful ingestion:
- Accurate timestamps for each span (start_time and/or timestamp) to preserve relative timing within a trace.
- Properly assigned trace and span IDs that reflect the correct parent-child relationships (trace_unique_id groups spans into a single trace; span_parent_id links children to their parents).
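
As a rough pre-flight check, the sketch below (reusing the spans list from the example above) verifies both prerequisites before uploading: every span carries at least one timestamp, and every non-root span references a parent that exists in the batch. The endpoint URL, the Bearer auth header, and the KEYWORDSAI_API_KEY environment variable are placeholders/assumptions; substitute the values from the API reference and your account.

```python
import os
import requests

def validate_spans(spans: list[dict]) -> None:
    """Check the two ingestion prerequisites before uploading."""
    ids = {s.get("span_unique_id") for s in spans}  # assumed key name
    for s in spans:
        # Every span needs at least one timestamp to preserve relative timing.
        if not (s.get("start_time") or s.get("timestamp")):
            raise ValueError(f"span {s.get('span_unique_id')} is missing timestamps")
        # Non-root spans must reference a parent present in the same batch.
        parent = s.get("span_parent_id")
        if parent is not None and parent not in ids:
            raise ValueError(f"span {s.get('span_unique_id')} has unknown parent {parent}")

validate_spans(spans)

# Placeholder URL and auth: take the real endpoint path and credentials
# from the API reference and the example project.
INGEST_URL = "https://<keywords-ai-host>/<logs-to-trace-endpoint>"
resp = requests.post(
    INGEST_URL,
    headers={"Authorization": f"Bearer {os.environ['KEYWORDSAI_API_KEY']}"},
    json=spans,  # the body is the array of span log objects
)
resp.raise_for_status()
```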