Key Features
- No code redeployment: Update your graph behavior instantly without updating application code
- Test multiple variants: Upload different graph configurations and run A/B experiments to compare performance
- Target specific users: Serve variants based on user attributes
- Control rollouts: Gradually deploy changes with precise traffic distribution
Create an Experiment
To run an A/B test:
- Register your graph to enable A/B testing without code redeployment
- Create and upload variants of your graphs
- Create targeting rules to control which users see what variant
- Launch your experiment and monitor performance
- Deploy the winning variant to all users
Register Your Graph
Registration enables your graph to use remote variants from Experiments instead of your local code configuration, allowing you to run A/B experiments without code redeployments.
- Click Register Graph in the top right corner.
- Enter the graph ID that you specified when creating your graph via the SDK (see the sketch after these steps).
- Click Register.
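For reference, a minimal sketch of where that graph ID comes from, assuming the TypeScript SDK's GraphBuilder and an `@inworld/runtime/graph` import path (check the SDK reference for exact names):

```typescript
import { GraphBuilder } from '@inworld/runtime/graph'; // assumed import path

const graph = new GraphBuilder({
  id: 'my-assistant-graph', // this is the graph ID to enter when registering
  apiKey: process.env.INWORLD_API_KEY,
})
  // ...add your nodes and edges here as usual...
  .build();
```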

Create Variants
Variants are different graph configurations that share the same graph ID. Variants can have different:
- Model choice: Change LLM (GPT-5, Mistral, Claude) or TTS/STT models
- Node configuration: Modify temperature, token limits, prompts, and other node parameters
- Graph topology: Add/remove/reorder built-in nodes while keeping same input/output interface
- Processing logic: Update edge conditions, add preprocessing steps, or modify data flow
Generate variant JSON:
- Option 1: For smaller changes (e.g., changing a model), you can export your existing graph to JSON and then modify the field directly in the JSON (e.g., variant_a.json). See the sketch after these steps.
- Option 2: For more involved changes, we recommend creating your graph in your IDE and exporting it to a JSON file.
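As an illustration of Option 1, the sketch below loads an exported graph, swaps a model field, and writes the result out as variant_a.json. The property path is hypothetical; inspect your own export to find the real one.

```typescript
import { readFileSync, writeFileSync } from 'node:fs';

// Load the JSON you exported from your existing graph.
const graph = JSON.parse(readFileSync('exported_graph.json', 'utf8'));

// Hypothetical path: point whichever node config holds the model
// name at a different LLM. Your export's shape may differ.
graph.nodes[0].config.model = 'gpt-5';

// Save the result as the variant file to upload in the next step.
writeFileSync('variant_a.json', JSON.stringify(graph, null, 2));
```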
Upload variant to Portal:
- Navigate to the Experiments tab and click on your graph record
- Click Create Variant
- Enter a descriptive name for the variant and upload the JSON file from Option 1 or 2 above
- Click Create to save the variant
 

Create Targeting Rules
Now that you have variants, you need to define targeting rules that control which users experience which variants. Each rule contains two key parts:
- Targeting: Filters that define which users qualify for this rule based on user attributes (e.g., country = "Canada" or user_tier = "premium")
- Rollout: Traffic distribution between variants (e.g., 50% to “Control”, 30% to “Variant A”, 20% to “Variant B”)
- Rules are processed from top to bottom
- Users are assigned to the first rule they satisfy
- Users who don’t match any rules get whatever variant is specified in the “Everyone else” catch-all rule at the bottom (this is called the default variant). Make sure to choose a variant for the “Everyone else” rule so you have a fallback!
Order matters! Put your most specific rules at the top and broader rules toward the bottom.
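The evaluation order can be summarized with a small sketch (illustrative pseudologic, not the actual Experiments implementation):

```typescript
type Rule = {
  matches: (attrs: Record<string, string>) => boolean; // targeting filters
  rollout: { variant: string; percent: number }[];     // traffic split
};

function selectRule(rules: Rule[], attrs: Record<string, string>, everyoneElse: Rule): Rule {
  for (const rule of rules) {
    if (rule.matches(attrs)) return rule; // first satisfied rule wins
  }
  return everyoneElse; // catch-all default at the bottom
}
```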
- In Portal, navigate to the Experiments tab, click on the graph record, and select 2. Targeting & Rollout. Click + Rule. A targeting configuration panel will appear with two columns.
Set up filters on user attributes
- Click Add Filters
- Enter an attribute name (e.g., user_tier, location, app_version)
- Select an operator from the dropdown (e.g., IN, NOT_IN, NUM_EQUAL, etc.)
- Enter the attribute values
 
- Set up the traffic distribution between variants (percentages must add up to 100%)
- Click Save in the top right corner
IMPORTANT: If you want traffic allocation to work as intended, when executing the graph, you MUST specify a targeting key within UserContext. If the targeting key is unspecified, all users will receive the same variant regardless of the traffic allocation you set.
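A minimal sketch of what this looks like, assuming a UserContext constructor that takes the attribute map plus the targeting key, and a hypothetical execute call on the graph built earlier (check the SDK reference for the exact signatures):

```typescript
import { UserContext } from '@inworld/runtime/graph'; // assumed import path

// Attributes drive targeting filters; the key (e.g., a stable user ID)
// drives the traffic split. Without the key, everyone lands in one bucket.
const userContext = new UserContext(
  { user_tier: 'premium', country: 'Canada' }, // attributes your rules filter on
  'user-12345',                                // targeting key: MUST be set
);

// Hypothetical execute call, for illustration only:
const result = await graph.execute(input, { userContext });
```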
Launch Your Experiment
Your targeting rules are disabled by default to prevent accidental traffic serving. When you’re ready to start your experiment:
- Click the menu icon on the top right of the rule
- Select Enable
- Click Save in the top right corner
Deploy Winning Variants
Once you’ve identified a winning variant from your experiment results, you can deploy it to all users:
- Navigate to the “Everyone else” rule at the bottom
- Set 100% traffic to your winning variant
- Delete or disable all other rules above this one
Manage Your Experiments
Once your experiments are running, you can manage them using the following operations.
Disable Rules: Stop an Experiment
To stop your experiment and return users to the default variant:
- Click the menu icon on the top right of the rule
- Select Disable
- Click Save in the top right corner
Delete Rules
To delete a rule:
- Click the menu icon on the top right of the rule
- Select Delete
- Click Save in the top right corner
The default rule cannot be disabled or deleted, ensuring all users always receive some variant.
Reorder Rules
To change the order in which rules are evaluated:
- Hover over the drag handle on the top left of the rule
- Drag and drop to reorder
Delete Variants
To delete a variant:
- Find the variant you want to delete in your graph’s variant list, and click the menu icon on the right
- Click Delete
Variants currently receiving traffic cannot be deleted.
To enable deletion, ensure:
- the variant’s traffic allocation is 0% for all rules AND
- the variant is NOT the default variant in the last targeting rule

How it Works
When your graph executes, the runtime determines whether to use your local graph configuration or a remote variant based on two key factors:
- Does the graph have remote config enabled? (Default = false)
- If yes, is the graph registered on Experiments and does it have variants to return?
Set enableRemoteConfig to true to use the remote Experiments variants:
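A minimal sketch, reusing the assumed constructor shape from the registration example above:

```typescript
import { GraphBuilder } from '@inworld/runtime/graph'; // assumed import path

const graph = new GraphBuilder({
  id: 'my-assistant-graph',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true, // default is false: local configuration only
})
  // ...nodes and edges...
  .build();
```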
Decision Flow
Execution with Remote Config
If remote config is enabled, the runtime performs these steps at execution:
1. Check Local Cache: The runtime checks whether it has a cached compiled graph for this user and graph ID
   - Cache Hit: Execute immediately (~microseconds)
   - Cache Miss: Proceed to Step 2
2. Fetch Remote Variant: Query the Experiments service with the user context to get the variant configuration
   - Fetch Success: Proceed to Step 3
   - Fetch Failure: Fall back to the local graph configuration and execute
3. Compile and Cache: The runtime compiles the fetched variant, caches the compiled graph, then executes (~milliseconds)
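In code terms, the flow looks roughly like this (a self-contained sketch of the logic above, not the runtime's actual implementation):

```typescript
type CompiledGraph = { run: (input: unknown) => unknown };

async function resolveGraph(
  cacheLookup: () => CompiledGraph | undefined,          // Step 1
  fetchVariant: () => Promise<unknown>,                  // Step 2
  compileAndCache: (variant: unknown) => CompiledGraph,  // Step 3
  localGraph: CompiledGraph,                             // fallback
): Promise<CompiledGraph> {
  const cached = cacheLookup();
  if (cached) return cached; // cache hit: execute in ~microseconds

  try {
    const variant = await fetchVariant(); // query Experiments with user context
    return compileAndCache(variant);      // compile + cache: ~milliseconds
  } catch {
    return localGraph;                    // fetch failure: local fallback
  }
}
```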
FAQ
How can I tell if my graph is using remote config or static config?
Check your GraphBuilder configuration for enableRemoteConfig. If it’s not specified, it defaults to false (the graph will use local configuration).
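For example (reusing the assumed constructor shape from the earlier sketches):

```typescript
import { GraphBuilder } from '@inworld/runtime/graph'; // assumed import path

const apiKey = process.env.INWORLD_API_KEY;

// enableRemoteConfig omitted: defaults to false, so this graph
// uses its local (static) configuration.
const localOnly = new GraphBuilder({ id: 'my-graph', apiKey });

// Explicitly opted in: the runtime will look for remote variants.
const remote = new GraphBuilder({ id: 'my-graph', apiKey, enableRemoteConfig: true });
```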
How do I know if my Experiments are working?
To verify that your Experiments are working:
- Check that enableRemoteConfig: true is set in your graph configuration
- Confirm your graph ID is registered in Experiments
- Review your targeting rules to ensure they are enabled and configured correctly
- Pass different user attributes when executing the graph to test that variant selection is working
- Monitor application logs and look for any Experiments-related error messages
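For the attribute test, a quick sketch (reusing the assumed UserContext shape and the hypothetical execute call from the earlier example):

```typescript
// Execute with attributes that match different rules and compare outputs.
const premiumUser = new UserContext({ user_tier: 'premium' }, 'user-a');
const freeUser = new UserContext({ user_tier: 'free' }, 'user-b');

// Hypothetical execute calls, for illustration only:
console.log(await graph.execute(input, { userContext: premiumUser }));
console.log(await graph.execute(input, { userContext: freeUser }));
```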
Why is my graph always using the local configuration instead of Experiments variants?
Check these common causes:
- Missing API key — The graph won’t start without the INWORLD_API_KEY environment variable
- Explicitly disabled — You have enableRemoteConfig: false in your GraphBuilder constructor
- Graph not registered — Your graph ID is not registered in Experiments
- No active targeting rules — All targeting rules are disabled or no variants are configured
What variants can I upload to Experiments without having to redeploy code?
Supported Changes:
- Model switching: Change LLM providers (GPT-5, Mistral, Claude) or TTS/STT models
- Configuration updates: Modify temperature, token limits, prompts, and other node parameters
- Graph topology: Add/remove/reorder built-in nodes while keeping same input/output interface
- Processing logic: Update edge conditions, add preprocessing steps, or modify data flow
Unsupported Changes:
- New Component Types: Add/modify new component types
- New Node Types: Adding unregistered custom nodes (Note: updating a custom node’s configuration IS allowed as long as the custom node is already registered)
- Breaking Interface Preservation: Changing the graph input/output interface
- Custom Edge Conditions: Using edge conditions beyond standard CEL expressions