Use Experiments to launch A/B tests of different graph variants and deploy the winning variant.

Key Features

  • No code redeployment: Update your graph behavior instantly without updating application code
  • Test multiple variants: Upload different graph configurations and run A/B experiments to compare performance
  • Target specific users: Serve variants based on user attributes
  • Control rollouts: Gradually deploy changes with precise traffic distribution

Create an Experiment

To run an A/B test:
  1. Register your graph to enable A/B testing without code redeployment
  2. Create and upload variants of your graphs
  3. Create targeting rules to control which users see what variant
  4. Launch your experiment and monitor performance
  5. Deploy the winning variant to all users

Register Your Graph

Registration enables your graph to use remote variants from Experiments instead of your local code configuration, allowing you to run A/B experiments without code redeployments.
To use Experiments, you first need to register your graph:
  1. Click Register Graph in the top right corner.
  2. Enter the graph ID that you specified when creating your graph via the SDK.
    Node
    // Graph ID is my-graph-id
    const myGraph = new GraphBuilder({
      id: "my-graph-id",
      apiKey: process.env.INWORLD_API_KEY,
      enableRemoteConfig: true, // set true to allow code-free experiments
    })
    
  3. Click Register.

Create Variants

Variants are different graph configurations that share the same graph ID. Variants can have different:
  • Model choice: Change LLM (GPT-5, Mistral, Claude) or TTS/STT models
  • Node configuration: Modify temperature, token limits, prompts, and other node parameters
  • Graph topology: Add/remove/reorder built-in nodes while keeping same input/output interface
  • Processing logic: Update edge conditions, add preprocessing steps, or modify data flow
To create a new graph variant:
  1. Generate variant JSON:
    • Option 1: For smaller changes (e.g., changing a model), you can export your existing graph to JSON
      Node
      import { GraphBuilder, RemoteLLMChatNode } from '@inworld/runtime/graph';
      import * as fs from 'fs';
      // Create the graph
      const graph = new GraphBuilder({id: 'my-graph', apiKey: process.env.INWORLD_API_KEY})
        .addNode(new RemoteLLMChatNode())
        .build();
      // Export to JSON file 
      fs.writeFileSync('variant_a.json', graph.toJSON());
      
      and then modify the relevant fields directly in the JSON. For example, to switch the LLM from OpenAI's gpt-4.1-mini to Anthropic's claude-4-sonnet, update the provider and model_name in the LLM component:
      variant_a.json
      {
        "schema_version": "1.0.0",
        "main": {
          "id": "node_llm_chat_graph",
          "nodes": [
            {
              "id": "RemoteLLMChatNode",
              "type": "LLMChatNode",
              "execution_config": {
                "type": "LLMChatNodeExecutionConfig",
                "properties": {
                  "llm_component_id": "RemoteLLMChatNode_llm_component",
                  "text_generation_config": {},
                  "stream": true,
                  "report_to_client": false
                }
              }
            }
          ],
          "edges": [],
          "end_nodes": [
            "RemoteLLMChatNode"
          ],
          "start_nodes": [
            "RemoteLLMChatNode"
          ]
        },
        "components": [
          {
            "id": "RemoteLLMChatNode_llm_component",
            "type": "LLMInterface",
            "creation_config": {
              "type": "RemoteLLMConfig",
              "properties": {
                "provider": "openai", 
                "model_name": "gpt-4.1-mini", 
                "provider": "anthropic", 
                "model_name": "claude-4-sonnet", 
                "default_config": {},
                "api_key": "{{INWORLD_API_KEY}}"
              }
            }
          }
        ]
      }
      
    • Option 2: For more involved changes, we recommend creating your graph in your IDE and exporting it to a JSON file
      Node
      import { GraphBuilder, RemoteLLMChatNode } from '@inworld/runtime/graph';
      import * as fs from 'fs';

      const llmNode = new RemoteLLMChatNode({
        id: 'llm',
        provider: 'anthropic',
        modelName: 'claude-4-sonnet',
      });

      const graph = new GraphBuilder({
        id: 'my-variant-graph',
        apiKey: process.env.INWORLD_API_KEY,
        enableRemoteConfig: true,
      })
        .addNode(llmNode)
        .setStartNode(llmNode)
        .setEndNode(llmNode)
        .build();

      // Export to JSON file
      fs.writeFileSync('variant_a.json', graph.toJSON());
    
  2. Upload variant to Portal:
    • Navigate to the Experiments tab and click on your graph record
    • Click Create Variant
    • Enter a descriptive name for the variant and upload the JSON file you generated in Step 1
    • Click Create to save the variant

Create Targeting Rules

Now that you have variants, you need to define targeting rules that control which users experience which variants. Each rule contains two key parts:
  • Targeting: Filters that define which users qualify for this rule based on the user attributes (e.g., country = "Canada" or user_tier = "premium")
  • Rollout: Traffic distribution between variants (e.g., 50% to “Control”, 30% to “Variant A”, 20% to “Variant B”)
Targeting rules are evaluated as follows:
  • Rules are processed from top to bottom
  • Users are assigned to the first rule they satisfy
  • Users who don’t match any rule get the variant specified in the “Everyone else” catch-all rule at the bottom (this is called the default variant). Make sure to choose a variant for the “Everyone else” rule so you always have a fallback!
Order matters! Put your most specific rules at the top and broader rules toward the bottom.
To create your first rule:
  1. In Portal, navigate to the Experiments tab. Click on the graph record, and select 2. Targeting & Rollout. Click + Rule. A targeting configuration panel will appear with two columns.
  2. Set up filters on user attributes
    • Click Add Filters
    • Enter an attribute name (e.g., user_tier, location, app_version, etc.)
    • Select an operator from the dropdown (e.g., IN, NOT_IN, NUM_EQUAL, etc.)
    • Enter the attribute values
  3. Set up the traffic distribution between variants (percentages must add up to 100%)
  4. Click Save in the top right corner
IMPORTANT: For traffic allocation to work as intended, you MUST specify a targeting key within UserContext when executing the graph. If no targeting key is specified, all users will receive the same variant regardless of the traffic allocation you set.
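Below is a minimal sketch of passing user attributes and a targeting key at execution time. Treat it as illustrative: the UserContext import, its constructor arguments, and the graph.start(...) options shown here are assumptions, so check the SDK reference for the exact signatures in your runtime version.
Node
import {
  GraphBuilder,
  RemoteLLMChatNode,
  UserContext,
} from '@inworld/runtime/graph';

const graph = new GraphBuilder({
  id: 'my-graph-id',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true, // required so the runtime can fetch Experiments variants
})
  .addNode(new RemoteLLMChatNode())
  .build();

// Assumed shape: user attributes that your targeting filters match against,
// plus a stable targeting key (e.g., a user ID) used for traffic bucketing.
const userContext = new UserContext(
  { user_tier: 'premium', country: 'Canada' },
  'premium-user-123',
);

// Assumed execution call that accepts the user context; adjust to your SDK's
// actual start/execute signature.
const outputStream = await graph.start('Hello!', { userContext });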

Launch Your Experiment

Your targeting rules are disabled by default to prevent accidental traffic serving. When you’re ready to start your experiment:
  1. Click the menu icon in the top right of the rule
  2. Select Enable
  3. Click Save in the top right corner
Your experiment is now live! Users will be automatically assigned to variants based on your targeting rules and traffic distribution settings.

Deploy Winning Variants

Once you’ve identified a winning variant from your experiment results, you can deploy it to all users:
  1. Navigate to the “Everyone else” rule at the bottom
  2. Set 100% traffic to your winning variant
  3. Delete or disable all other rules above this one
If you want to do a gradual rollout, allocate a smaller percentage (for example, 10%) to your winning variant and gradually increase it over time.

Manage Your Experiments

Once your experiments are running, you can manage them using the following operations.

Disable Rules: Stop an Experiment

To stop your experiment and return users to the default variant:
  1. Click the menu icon in the top right of the rule
  2. Select Disable
  3. Click Save in the top right corner
Delete Rules

To clean up old experiments:
  1. Click the menu icon in the top right of the rule
  2. Select Delete
  3. Click Save in the top right corner
The default rule cannot be disabled or deleted, ensuring all users always receive some variant.
Reorder Rules

Rules are evaluated top to bottom. Users match the first rule they qualify for. To change rule priority:
  1. Hover over the drag handle in the top left of the rule
  2. Drag and drop to reorder
Delete Variants

You can delete variants that are no longer needed:
  1. Find the variant you want to delete in your graph’s variant list, and click the menu icon on the right
  2. Click Delete
Variants currently receiving traffic cannot be deleted. To enable deletion, ensure:
  • the variant’s traffic allocation is 0% for all rules AND
  • the variant is NOT the default variant in the last targeting rule

How it Works

When your graph executes, the runtime determines whether to use your local graph configuration or a remote variant based on two key factors:
  1. Does the graph have remote config enabled? (Default = false)
  2. If yes, is the graph registered on Experiments and does it have variants to return?
By default, graphs have remote config disabled. Set enableRemoteConfig to true to use the remote Experiments variants:
Node
const graph = new GraphBuilder({
  id: 'my-graph-id',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true, // Set to true to enable use of remote Experiments variants
})
  .addNode(llmNode)
  .setStartNode(llmNode)
  .setEndNode(llmNode)
  .build();

Decision Flow

Execution with Remote Config

If remote config is enabled, the runtime performs these steps at execution:
  1. Check Local Cache: Runtime checks if it has a cached compiled graph for this user and graph ID
    • Cache Hit: Execute immediately (~microseconds)
    • Cache Miss: Proceed to Step 2
  2. Fetch Remote Variant: Query Experiments service with user context to get variant configuration
    • Fetch Success: Proceed to Step 3
    • Fetch Failure: Fall back to local graph configuration and execute
  3. Compile and Cache: Runtime compiles the fetched variant, caches the compiled graph, then executes (~milliseconds)
Note: The cache stores compiled, executable graph objects (not just JSON configurations) to avoid the expensive compilation step on subsequent requests.
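For reference, here is the same decision flow expressed as a self-contained TypeScript sketch. All of the types and names below (resolveGraph, VariantCache, ExperimentsClient, and so on) are hypothetical and not part of the SDK; the sketch simply restates the three steps above, including the fallback to the local configuration.
Node
// Hypothetical sketch only: none of these types, services, or function names
// are real SDK APIs. They restate Steps 1-3 above.
type CompiledGraph = { execute: (input: unknown) => Promise<unknown> };

interface VariantCache {
  get(graphId: string, targetingKey: string): CompiledGraph | undefined;
  set(graphId: string, targetingKey: string, graph: CompiledGraph): void;
}

interface ExperimentsClient {
  // Returns the variant JSON assigned to this user by the targeting rules.
  fetchVariant(graphId: string, targetingKey: string): Promise<string>;
}

async function resolveGraph(
  graphId: string,
  targetingKey: string,
  opts: {
    remoteConfigEnabled: boolean;
    localGraph: CompiledGraph;
    cache: VariantCache;
    experiments: ExperimentsClient;
    compile: (variantJson: string) => CompiledGraph;
  },
): Promise<CompiledGraph> {
  // Remote config disabled (the default): always use the local configuration.
  if (!opts.remoteConfigEnabled) return opts.localGraph;

  // Step 1: check the local cache of compiled graphs (cache hit ~ microseconds).
  const cached = opts.cache.get(graphId, targetingKey);
  if (cached) return cached;

  try {
    // Step 2: fetch the variant assigned to this user from the Experiments service.
    const variantJson = await opts.experiments.fetchVariant(graphId, targetingKey);
    // Step 3: compile the variant and cache the compiled graph (~ milliseconds).
    const compiled = opts.compile(variantJson);
    opts.cache.set(graphId, targetingKey, compiled);
    return compiled;
  } catch {
    // Fetch failure: fall back to the local graph configuration.
    return opts.localGraph;
  }
}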

FAQ

How can I tell if my graph is using remote config or static config?

Check the enableRemoteConfig setting in your GraphBuilder configuration. If it’s not specified, it defaults to false, and the graph will use its local configuration.
Node
const graph = new GraphBuilder({
  id: 'my-graph-id',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true, // Set to true to enable use of remote Experiments variants
})
You can also check your application logs for Experiments-related messages during graph execution.

How do I know if my Experiments are working?

Verify your Experiments are working by:
  • Check that enableRemoteConfig: true is set in your graph configuration
  • Confirm your graph ID is registered in Experiments
  • Review your targeting rules to ensure they are enabled and configured correctly
  • Pass different user attributes when executing the graph to test that variant selection is working
  • Monitor application logs and look for any Experiments-related error messages
If your graph always uses the local configuration, check the troubleshooting section below.

Why is my graph always using the local configuration instead of Experiments variants?

Check these common causes:
  1. Missing API key — Graph won’t start without INWORLD_API_KEY environment variable
  2. Explicitly disabled — You have enableRemoteConfig: false in your GraphBuilder constructor
  3. Graph not registered — Your graph ID is not registered in Experiments
  4. No active targeting rules — All targeting rules are disabled or no variants are configured

What variants can I upload to Experiments without having to redeploy code?

Supported Changes:
  • Model switching: Change LLM providers (GPT-5, Mistral, Claude) or TTS/STT models
  • Configuration updates: Modify temperature, token limits, prompts, and other node parameters
  • Graph topology: Add/remove/reorder built-in nodes while keeping same input/output interface
  • Processing logic: Update edge conditions, add preprocessing steps, or modify data flow
Requires Code Deployment (if any one of these applies):
  1. New component types: Adding or modifying component types
  2. New node types: Adding unregistered custom nodes (Note: updating a custom node’s configuration IS allowed as long as the custom node is already registered)
  3. Interface changes: Changing the graph’s input/output interface
  4. Custom edge conditions: Using edge conditions beyond standard CEL expressions
Key Rule: Experiments only supports variants using already registered components that preserve your graph’s input/output interface.

Next Steps