In the Inworld Runtime, a graph is the core orchestration unit. Graphs contain nodes connected by edges, and manage the execution flow of data through the entire system.

Overview

There are two primary steps in setting up your experience using Inworld Graphs:
  1. Graph Creation: Creating a graph, adding nodes, connecting them with edges, defining start/end points, and building the graph
  2. Graph Execution: Processing your data by providing input to the graph and capturing the results from your target nodes
The following sections provide more details about each step.

Graph Creation

Basic Graph

The simplest form of a graph is a single-node graph. Creating one consists of:
  1. creating a node
  2. creating a graph builder
  3. adding the node to the graph
  4. setting that node as both the start and end node for execution
  5. building the graph
import { GraphBuilder, RemoteLLMChatNode } from '@inworld/runtime/graph';

// Create an LLM chat node
const llmNode = new RemoteLLMChatNode({
  id: 'LLMChatNode',
  provider: 'openai',
  modelName: 'gpt-4o-mini',
  stream: false,
});

// Create and configure the graph
const graph = new GraphBuilder({
  id: 'MyBasicGraph',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: false,
})
  .addNode(llmNode)
  .setStartNode(llmNode)
  .setEndNode(llmNode)
  .build();

Multi-Node Graph

Most graphs consist of multiple nodes that power more advanced functionality. The example below creates a graph that converts the LLM response to audio using TTS.
import { GraphBuilder, RemoteLLMChatNode, TextChunkingNode, RemoteTTSNode } from '@inworld/runtime/graph';

// Create multiple nodes
const llmNode = new RemoteLLMChatNode({
  id: 'llm_node',
  provider: 'openai',
  modelName: 'gpt-4o-mini',
  stream: false,
});

const textChunkingNode = new TextChunkingNode({
  id: 'text_chunking_node',
});

const ttsNode = new RemoteTTSNode({
  id: 'tts_node',
  speakerId: 'Dennis',
  modelId: 'inworld-tts-1-max',
  sampleRate: 24000,
  temperature: 0.7,
  speakingRate: 1.0,
});

// Build the graph with multiple nodes and edges
const graph = new GraphBuilder({ id: 'llm-tts-graph', apiKey: process.env.INWORLD_API_KEY, enableRemoteConfig: false })
  .addNode(llmNode)
  .addNode(textChunkingNode) 
  .addNode(ttsNode)
  .addEdge(llmNode, textChunkingNode)
  .addEdge(textChunkingNode, ttsNode)
  .setStartNode(llmNode)
  .setEndNode(ttsNode)
  .build();

Graph with Multiple Start Nodes

Graphs require at least one start node and one end node, but they can also have multiple of either. You can define these with setStartNodes() and setEndNodes().
import 'dotenv/config';

import {
  CustomNode,
  GraphBuilder,
  GraphTypes,
  ProcessContext,
  RemoteLLMChatNode
} from '@inworld/runtime/graph';

// Custom node that combines both LLM outputs
class CombineColorsNode extends CustomNode {
  process(context: ProcessContext, color1: GraphTypes.Content, color2: GraphTypes.Content): GraphTypes.Content {
    const output = `LLM 1 favorite color: ${color1.content}, LLM 2 favorite color: ${color2.content}`;
    console.log('\n' + output + '\n');
    return new GraphTypes.Content({ content: output });
  }
}

// Create the two LLM nodes and the combine node
const llm1 = new RemoteLLMChatNode({
  id: 'llm-1',
  modelName: 'gpt-4.1-nano',
  provider: 'openai'
});

const llm2 = new RemoteLLMChatNode({
  id: 'llm-2',
  modelName: 'gemini-2.0-flash',
  provider: 'google'
});

const combineNode = new CombineColorsNode({ id: 'combine-colors' });

// Build graph with two start nodes
const graphBuilder = new GraphBuilder({
  id: 'two-start-nodes-example',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: false
})
  .addNode(llm1)
  .addNode(llm2)
  .addNode(combineNode)
  .addEdge(llm1, combineNode)
  .addEdge(llm2, combineNode)
  .setStartNodes([llm1, llm2])
  .setEndNodes([combineNode]);

const executor = graphBuilder.build();

// Execute the graph
async function main() {
  const input = new GraphTypes.LLMChatRequest({
    messages: [{
      role: 'user',
      content: 'What is your favorite color? Return just the color'
    }]
  });
  
  const { outputStream } = executor.start(input);
  for await (const event of outputStream) {
    await event.processResponse({
      // CombineColorsNode already logs the combined output
      Content: (data: GraphTypes.Content) => {}
    });
  }
}

main();

Graph Datastore

The datastore is a key-value storage mechanism that allows you to pass additional data across nodes in a graph execution. You can access the datastore through ProcessContext in any custom node’s process() method, and use add() to store data and get() to retrieve it. See this guide on using context for an example of how to use the datastore.
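The datastore pattern described above can be sketched in isolation. The snippet below is a standalone illustration, not the Runtime's actual implementation: the `Datastore` interface, the `InMemoryDatastore` class, and the two helper functions are assumptions for illustration only. In real code you would obtain the datastore from ProcessContext inside a custom node's process() method rather than constructing it yourself.

```typescript
// Illustrative sketch only: a minimal key-value datastore mirroring the
// add()/get() API described above. In the Runtime, the datastore comes
// from ProcessContext inside a custom node's process() method.
interface Datastore {
  add(key: string, value: unknown): void;
  get(key: string): unknown;
}

class InMemoryDatastore implements Datastore {
  private store = new Map<string, unknown>();

  add(key: string, value: unknown): void {
    this.store.set(key, value);
  }

  get(key: string): unknown {
    return this.store.get(key);
  }
}

// An upstream node stores data during its process() step...
function upstreamProcess(datastore: Datastore, userName: string): string {
  datastore.add('userName', userName);
  return `Hello, ${userName}!`;
}

// ...and a downstream node in the same execution reads it back.
function downstreamProcess(datastore: Datastore): string {
  const name = datastore.get('userName') as string;
  return `Goodbye, ${name}!`;
}

const datastore = new InMemoryDatastore();
console.log(upstreamProcess(datastore, 'Ada')); // Hello, Ada!
console.log(downstreamProcess(datastore)); // Goodbye, Ada!
```

Because the datastore is shared across the whole execution, it is useful for passing data between nodes that are not directly connected by an edge.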

Graph Execution

After a graph has been created, execute it by providing input data and handling the results. Execution returns an output stream that allows you to process the graph's output. Below is an example of executing the multi-node LLM-to-TTS graph created above.
import { GraphTypes } from '@inworld/runtime/graph';

// Execute the graph
const { outputStream } = graph.start(new GraphTypes.LLMChatRequest({
  messages: [{ role: 'user', content: 'Hello, how are you?' }]
}));

// Handle response
for await (const result of outputStream) {
  await result.processResponse({
    Content: (response) => console.log('Response:', response.content),
    TTSOutputStream: async (ttsStream) => {
      for await (const chunk of ttsStream) {
        console.log('Audio chunk received');
      }
    },
  });
}