Overview
The `create` method creates a new experiment with multiple variants to test. It is the foundation for running A/B tests and comparing different approaches.
Method Signature
Synchronous
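A sketch of what the synchronous signature plausibly looks like, reconstructed from the parameter table below; the parameter order, defaults, and return annotation are assumptions.

```python
from typing import Any, Dict, List, Optional

def create(
    name: str,
    variants: List[Dict],
    description: Optional[str] = None,
    metrics: Optional[List[str]] = None,
    traffic_split: Optional[Dict[str, float]] = None,
    metadata: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Create a new experiment and return its details as a dictionary."""
    ...
```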
Asynchronous
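Assuming the asynchronous form mirrors the synchronous one and is simply awaitable:

```python
async def create(
    name: str,
    variants: List[Dict],
    description: Optional[str] = None,
    metrics: Optional[List[str]] = None,
    traffic_split: Optional[Dict[str, float]] = None,
    metadata: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Awaitable variant; otherwise identical to the synchronous form."""
    ...
```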
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `name` | `str` | Yes | The name of the experiment |
| `description` | `str` | No | Description of the experiment |
| `variants` | `List[Dict]` | Yes | List of variants to test |
| `metrics` | `List[str]` | No | Metrics to track (default: standard metrics) |
| `traffic_split` | `Dict[str, float]` | No | Traffic distribution between variants |
| `metadata` | `Dict[str, Any]` | No | Custom metadata for the experiment |
Returns
Returns a dictionary describing the created experiment; see Experiment Structure below for the fields it contains.
Examples
Basic A/B Test
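A minimal sketch, assuming a hypothetical `client` object that exposes this method as `client.experiments.create`; the variant fields (here, `prompt`) are illustrative, not a fixed schema.

```python
experiment = client.experiments.create(
    name="greeting-tone-test",
    description="Does a casual greeting outperform a formal one?",
    variants=[
        {"name": "control", "prompt": "Good day. How may I assist you?"},
        {"name": "treatment", "prompt": "Hey! What can I help with?"},
    ],
)
print(experiment["id"], experiment["status"])  # new experiments start as "draft"
```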
Experiment with Custom Metrics
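A sketch of overriding the default metrics, using the same hypothetical client; the metric names are examples, not a documented set.

```python
experiment = client.experiments.create(
    name="summary-quality-test",
    variants=[
        {"name": "control", "max_words": 100},
        {"name": "treatment", "max_words": 50},
    ],
    metrics=["user_rating", "read_time_seconds", "completion_rate"],
)
```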
Experiment with Custom Traffic Split
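A sketch of an uneven split, assuming the keys of `traffic_split` match variant names and the weights sum to 1.0:

```python
experiment = client.experiments.create(
    name="cautious-rollout",
    variants=[
        {"name": "control", "pipeline": "current"},
        {"name": "candidate", "pipeline": "new"},
    ],
    # keep 90% of traffic on control while the candidate is validated
    traffic_split={"control": 0.9, "candidate": 0.1},
)
```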
Multi-Variant Experiment
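Testing more than two variants is just a longer `variants` list; a sketch:

```python
experiment = client.experiments.create(
    name="temperature-sweep",
    variants=[
        {"name": "temp-02", "temperature": 0.2},
        {"name": "temp-05", "temperature": 0.5},
        {"name": "temp-08", "temperature": 0.8},
    ],
    # traffic_split omitted; an even split across variants is an assumed default
)
```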
Experiment with Metadata
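A sketch of attaching custom metadata, following the best practice (below) of recording the hypothesis and success criteria; all keys are illustrative:

```python
experiment = client.experiments.create(
    name="checkout-copy-test",
    variants=[
        {"name": "control", "copy": "Complete your purchase"},
        {"name": "treatment", "copy": "Buy now"},
    ],
    metadata={
        "hypothesis": "Shorter button copy lifts completion rate",
        "owner": "growth-team",
        "success_criteria": "completion_rate +2pp over control",
    },
)
```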
Asynchronous Creation
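A sketch of the awaitable form, assuming a hypothetical `async_client` counterpart to `client`:

```python
import asyncio

async def main() -> None:
    # async_client is a hypothetical asynchronous counterpart of client
    experiment = await async_client.experiments.create(
        name="async-created-test",
        variants=[
            {"name": "control"},
            {"name": "treatment"},
        ],
    )
    print(experiment["id"])

asyncio.run(main())
```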
Model Comparison Experiment
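A sketch of comparing two models as variants; the model identifiers and metric names are placeholders:

```python
experiment = client.experiments.create(
    name="model-comparison",
    variants=[
        {"name": "model-a", "model": "small-model-v1"},
        {"name": "model-b", "model": "large-model-v2"},
    ],
    metrics=["accuracy", "latency_ms", "cost_per_request"],
    traffic_split={"model-a": 0.5, "model-b": 0.5},
)
```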
Batch Experiment Creation
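A batch can be sketched as a plain loop over configurations, assuming no dedicated bulk endpoint:

```python
variant_pairs = [
    ("formal", "casual"),
    ("short", "long"),
    ("emoji", "plain"),
]
experiments = [
    client.experiments.create(
        name=f"tone-test-{a}-vs-{b}",
        variants=[{"name": a}, {"name": b}],
    )
    for a, b in variant_pairs
]
print([e["id"] for e in experiments])
```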
Error Handling
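A sketch of defensive creation; the exact exception types depend on the SDK, so `ValueError` here is an assumption:

```python
try:
    experiment = client.experiments.create(
        name="",  # likely invalid: name is required
        variants=[{"name": "only-one"}],
    )
except ValueError as err:
    # Substitute the SDK's own exception types here if it defines them;
    # ValueError is used only for illustration.
    print(f"Could not create experiment: {err}")
```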
Experiment Structure
A created experiment contains:
- `id`: Unique experiment identifier
- `name`: Experiment name
- `description`: Optional description
- `variants`: List of variants being tested
- `metrics`: Metrics being tracked
- `traffic_split`: Traffic distribution
- `status`: Current status (`draft` when first created)
- `metadata`: Custom metadata
- `created_at`: Creation timestamp
- `created_by`: User who created the experiment
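For illustration only, a returned dictionary might look like this; every value is hypothetical:

```python
experiment = {
    "id": "exp_abc123",
    "name": "greeting-tone-test",
    "description": "Does a casual greeting outperform a formal one?",
    "variants": [{"name": "control"}, {"name": "treatment"}],
    "metrics": ["user_rating"],
    "traffic_split": {"control": 0.5, "treatment": 0.5},
    "status": "draft",
    "metadata": {},
    "created_at": "2025-01-15T10:30:00Z",
    "created_by": "user_42",
}
```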
Best Practices
- Use descriptive names that indicate the experiment purpose
- Define clear hypotheses in the description or metadata
- Ensure variants are truly different and testable
- Choose appropriate metrics for your goals
- Plan traffic split based on risk tolerance
- Include success criteria in metadata
Common Use Cases
- A/B testing different prompt variations
- Comparing model performance
- Testing new features with limited traffic
- Optimizing for specific metrics
- Multi-variant testing for complex scenarios
- Progressive rollout experiments