This template demonstrates how to use Custom Nodes with more than one input in a graph. This is possible because a Custom Node's process() method accepts an arbitrary number of inputs — one per incoming edge. In this example, we ask two LLMs to write a poem, then pass both outputs to a third LLM, which judges which poem is better. (The code below also writes a visualization of the resulting graph to graph.png.)
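The fan-in idea can be sketched without the Runtime: a node is essentially a function that receives one positional argument per incoming edge, and the Runtime calls process() the same way. The helper below mimics what the reviewer node in this template does with its two upstream results (the names here are illustrative, not part of the Inworld API):

```typescript
// Illustrative sketch only: a "node" with two inputs is just a function
// that takes one argument per parent node's output.
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Mimics the reviewer node: two upstream poems in, one composed
// chat request out.
function composeReview(poem1: string, poem2: string): ChatMessage[] {
  const prompt =
    'Review these two poems and analyze which one is better.' +
    `\n\nPoem 1:\n\n${poem1}\n\nPoem 2:\n\n${poem2}`;
  return [{ role: 'user', content: prompt }];
}

const messages = composeReview('There once was a pizza...', 'A pie so divine...');
console.log(messages[0].content);
```

In the real graph, the Runtime supplies the two arguments automatically because two edges point at the reviewer node; the order of the arguments follows the order in which the edges were added.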
Architecture
  • Backend: Inworld Runtime
  • Frontend: N/A (CLI example)

Getting Started

  1. Download and unzip the Getting Started Project
  2. In that project folder, run npm install to download the Inworld Runtime and other dependencies
  3. Create a .env file and add your Base64 Runtime API key:
.env
INWORLD_API_KEY=<your_api_key>
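If you prefer the command line, the file can be created in one step (substitute your actual key for the placeholder):

```shell
# Write the .env file in the project root; the value is the Base64
# Runtime API key copied from the Inworld portal.
echo "INWORLD_API_KEY=<your_api_key>" > .env
```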
  4. Create a new file called multi-input.ts and paste in the following code:
multi-input.ts
import 'dotenv/config';

import {
  CustomNode,
  GraphBuilder,
  GraphTypes,
  ProcessContext,
  RemoteLLMChatNode
} from '@inworld/runtime/graph';

const poemPrompt = 'Return ONLY a limerick about: ';
const reviewPrompt = 'Review these two poems and analyze which one is better.';

// Define a custom node which turns a prompt into messages for an LLM
class PoemPromptNode extends CustomNode {
    process(context: ProcessContext, input: string): GraphTypes.LLMChatRequest {
        const composedPrompt = poemPrompt + input;
        return new GraphTypes.LLMChatRequest({
            messages: [
                {
                    role: 'user',
                    content: composedPrompt,
                },
            ]
        });
    }
}

class ReviewPromptNode extends CustomNode {
    process(context: ProcessContext, poem1: GraphTypes.Content, poem2: GraphTypes.Content): GraphTypes.LLMChatRequest {
        const composedPrompt = `${reviewPrompt}\n\nPoem 1:\n\n${poem1.content}\n\nPoem 2:\n\n${poem2.content}`;
        return new GraphTypes.LLMChatRequest({
            messages: [
                {
                    role: 'user',
                    content: composedPrompt,
                },
            ]
        });
    }
}

const reviewPromptNode = new ReviewPromptNode({
    id: 'review-prompt-node',
});
const poemPromptNode = new PoemPromptNode({
    id: 'poem-prompt-node',
});
const openaiLLMNode = new RemoteLLMChatNode({
    id: 'openai-llm-node',
    modelName: 'gpt-4o-mini',
    provider: 'openai',
    textGenerationConfig: {
        maxNewTokens: 1000,
    },
    reportToClient: true
});

const anthropicLLMNode = new RemoteLLMChatNode({
    id: 'anthropic-llm-node',
    modelName: 'claude-3-5-haiku-latest',
    provider: 'anthropic',
    textGenerationConfig: {
        maxNewTokens: 1000,
    },
    reportToClient: true
});

const googleLLMNode = new RemoteLLMChatNode({
    id: 'google-llm-node',
    modelName: 'gemini-2.0-flash',
    provider: 'google',
    textGenerationConfig: {
        maxNewTokens: 1000,
    },
    reportToClient: true
});

// Creating a graph builder instance and adding the nodes to it
const graphBuilder = new GraphBuilder({
  id: 'multi-input-graph',
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: false
})
  .addNode(poemPromptNode)
  .addNode(reviewPromptNode)
  .addNode(openaiLLMNode)
  .addNode(anthropicLLMNode)
  .addNode(googleLLMNode)
  .addEdge(poemPromptNode, openaiLLMNode)
  .addEdge(poemPromptNode, anthropicLLMNode)
  .addEdge(anthropicLLMNode, reviewPromptNode)
  .addEdge(openaiLLMNode, reviewPromptNode)
  .addEdge(reviewPromptNode, googleLLMNode)
  .setStartNode(poemPromptNode)
  .setEndNode(googleLLMNode);

// Creating an executor instance from the graph builder
const executor = graphBuilder.build();
executor.visualize('graph.png');

main();

// Main function that executes the graph
async function main() {
  // Executing the graph and waiting for the output stream.
  const { outputStream } = executor.start('pizza');
  for await (const event of outputStream) {
    await event.processResponse({
      Content: (data: GraphTypes.Content) => {
        console.log(`\n${data.content}\n`);
      },
    });
  }
}
  5. Run npx ts-node multi-input.ts to execute the graph, observing both poems and the review