Experiment create
Experiment create by PostHog is an AI-powered tool that helps product teams set up and manage A/B test experiments efficiently. It streamlines experiment creation by guiding users through hypothesis definition, metric selection, event tracking, variant configuration, and targeting, supporting data-driven decision-making.
Features
- Guided setup for defining experiment goals and hypotheses.
- Automated search for reusable feature flags, with suggestions to reuse an existing flag or create a new key.
- Advanced metric selection including mean, funnel, and ratio types with event integration.
- Flexible variant configuration with customizable rollout percentages (see the sketch after this list).
- Targeting options to segment user groups and exclude internal test accounts.
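As an illustration of the variant configuration, here is a minimal sketch of a three-way split with custom rollout percentages. The field names (`key`, `rollout_percentage`) are assumptions for illustration, not a documented schema.

```python
# A minimal sketch of a three-way variant split, assuming the variants
# parameter takes objects with a variant key and a rollout percentage
# (field names are assumptions, not a confirmed schema).
variants = [
    {"key": "control", "rollout_percentage": 34},
    {"key": "test-a", "rollout_percentage": 33},
    {"key": "test-b", "rollout_percentage": 33},
]

# Rollout percentages should cover all users, so they must sum to 100.
assert sum(v["rollout_percentage"] for v in variants) == 100
```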
Benefits
- Speeds up the creation of robust A/B tests with step-by-step guidance.
- Ensures accurate and relevant metric tracking aligned with user goals.
- Reduces errors by integrating real-time event data and existing feature flags.
- Improves experiment quality through precise user targeting and variant control.
- Supports data-driven optimization to boost product performance and conversion rates.
Description
Create a comprehensive A/B test experiment.

PROCESS:
1. Understand the experiment goal and hypothesis.
2. Search existing feature flags with the 'feature-flags-get-all' tool first and suggest reuse or a new key.
3. Help the user define success metrics by asking what they want to optimize.
4. MOST IMPORTANT: Use the 'event-definitions-list' tool to find available events in their project.
5. For funnel metrics, ask for the specific event sequence (e.g., ['product_view', 'add_to_cart', 'purchase']) and use the funnel_steps parameter, as sketched below.
6. Configure variants (default 50/50 control/test unless they specify otherwise).
7. Set targeting criteria if needed.
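For step 5, a minimal sketch of how a funnel primary metric might be shaped, assuming a metric object with a name, a type, and an ordered list of funnel step events. Apart from `funnel_steps`, the field names are assumptions rather than a confirmed schema; the event names mirror the example sequence above.

```python
# A minimal sketch of a funnel primary metric, assuming the tool accepts a
# metric object with a human-readable name, a metric type, and an ordered
# list of funnel step events. Field names other than `funnel_steps` are
# assumptions; the event names mirror the example sequence in step 5.
primary_metric = {
    "name": "Checkout funnel conversion",
    "metric_type": "funnel",
    "funnel_steps": ["product_view", "add_to_cart", "purchase"],
}
```

Each step must reference an event that already exists in the project, which is why step 4 checks 'event-definitions-list' before metrics are configured.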
Parameters
The tool accepts 12 parameters:
| Name | Type | Description |
|---|---|---|
| name (required) | string | Experiment name; should clearly describe what is being tested |
| description | string | Detailed description of the experiment hypothesis, what changes are being tested, and expected outcomes |
| feature_flag_key (required) | string | Feature flag key (letters, numbers, hyphens, underscores only). IMPORTANT: First search for existing feature flags that might be suitable using the feature-flags-get-all tool, then suggest reusing existing ones or creating a new key based on the experiment name |
| type | enum | Experiment type: 'product' for backend/API changes, 'web' for frontend UI changes |
| primary_metrics | object[] | Primary metrics to measure experiment success. IMPORTANT: Each metric needs event_name to track data. For funnels, provide funnel_steps array with event names for each step. Ask user what events they track, or use project-property-definitions to find available events. |
| secondary_metrics | object[] | Secondary metrics to monitor for potential side effects or additional insights. Each metric needs event_name. |
| variants | object[] | Experiment variants. If not specified, defaults to 50/50 control/test split. Ask user how many variants they need and what each tests |
| minimum_detectable_effect | number | Minimum detectable effect in percentage. Lower values require more users but detect smaller changes. Suggest 20-30% for most experiments |
| filter_test_accounts | boolean | Whether to filter out internal test accounts |
| target_properties | object | Properties to target specific user segments (e.g., country, subscription type) |
| draft | boolean | Create as draft (true) or launch immediately (false). Recommend draft for review first |
| holdout_id | number | Holdout group ID if this experiment should exclude users from other experiments |
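To tie the parameters together, here is a hedged sketch of a complete payload. Only the top-level parameter names come from the table above; the nested metric fields and the specific values are illustrative assumptions.

```python
import json

# A sketch of a complete experiment-create payload, assuming parameters are
# passed as a flat JSON object. Only the top-level parameter names come from
# the table above; nested metric fields are assumptions.
payload = {
    "name": "Checkout button copy test",
    "description": (
        "Hypothesis: changing the checkout button copy from 'Buy now' to "
        "'Complete purchase' increases purchase conversion."
    ),
    "feature_flag_key": "checkout-button-copy-test",
    "type": "web",                      # frontend UI change
    "primary_metrics": [
        {
            "name": "Purchase conversion",
            "metric_type": "mean",
            "event_name": "purchase",   # event must exist in the project
        }
    ],
    # variants omitted: defaults to a 50/50 control/test split
    "minimum_detectable_effect": 20,    # percent; suggested range is 20-30
    "filter_test_accounts": True,       # exclude internal test accounts
    "draft": True,                      # create as a draft for review first
}

print(json.dumps(payload, indent=2))
```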