In this tutorial, we’ll walk through building a command-line chat experience using the Inworld Node.js Agent Runtime SDK. We’ll create a graph of both custom and built-in nodes that is executed on each user input.

Set Up the Application

We’ll start by creating a new directory, entering it, and initializing it using npm.
bash
mkdir quick-start
cd quick-start
npm init -y
Next, we’ll install the Inworld Node.js Agent Runtime SDK as well as other necessary dependencies.
bash
npm install @inworld/runtime @types/node tsx typescript dotenv uuid
Create a .env file in your project root with the following content:
.env
INWORLD_API_KEY=your_api_key_here
Replace your_api_key_here with your actual API key from the Inworld Portal.
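The later scripts check for this key inline and throw if it is missing. If you prefer to fail fast in one place, a small helper like the one below does the same check; `requireEnv` is our own convention, not part of the Inworld SDK:

```typescript
// Hypothetical helper (not part of the Inworld SDK): read a required
// environment variable or throw with a descriptive message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `${name} is not set. Add it to your .env file or export it in the shell.`
    );
  }
  return value;
}

// Usage: const apiKey = requireEnv("INWORLD_API_KEY");
```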

Create Basic Chat

We’ll create a new file called chat.ts in your project root and add the following code:
chat.ts
import * as readline from "node:readline/promises";

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {
  while (true) {
    await terminal.question(`You: `);
    process.stdout.write(`Assistant: Hm, let me think about that...\n`);
  }
}

main().catch(console.error);
Run the script from your project root:
bash
npx tsx chat.ts
Congratulations! You’ve just created an interactive chat experience with a very thoughtful assistant. But of course we want a smarter assistant, so let’s create a simple graph that integrates an LLM call.
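One small quality-of-life addition: the loop above runs forever, so Ctrl+C is the only way out. An SDK-independent check (the `isExitCommand` helper below is our own convention, not part of the tutorial code) lets the user type `exit` to leave cleanly:

```typescript
// Hypothetical convention (not part of the SDK): treat a few keywords
// as a request to end the chat session.
function isExitCommand(input: string): boolean {
  return ["exit", "quit", "bye"].includes(input.trim().toLowerCase());
}

// Inside the loop you could then write:
//   const userInput = await terminal.question(`You: `);
//   if (isExitCommand(userInput)) {
//     terminal.close();
//     break;
//   }
```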

Add LLM Call

Create a new file called llm-chat.ts in your project root with the following code:
llm-chat.ts
import 'dotenv/config';
import * as readline from "node:readline/promises";

const apiKey = process.env.INWORLD_API_KEY;
if (!apiKey) {
  throw new Error(
    "INWORLD_API_KEY environment variable is not set. Either add it to .env file in the root of the package or export it to the shell."
  );
}

import { GraphBuilder, GraphTypes, RemoteLLMChatNode } from "@inworld/runtime/graph";

let messages: string = "";

const llm = new RemoteLLMChatNode({
  id: "llm",
  provider: "openai",
  modelName: "gpt-4o-mini",
  // textGenerationConfig: { maxNewTokens: 256, temperature: 0.8 },  // optional
});

const graph = new GraphBuilder({ id: 'quick-start', apiKey })
  .addNode(llm)
  .setStartNode(llm)
  .setEndNode(llm)
  .build();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {
  while (true) {
    const userInput = await terminal.question(`You: `);
    messages += `\nUser: ${userInput}`;
    const prompt = `Respond briefly to the latest message: ${messages}`;
    const chatInput = {
      messages: [
        {
          role: "system",
          content: prompt,
        },
      ],
    };
    const { outputStream } = await graph.start(new GraphTypes.LLMChatRequest(chatInput));
    
    for await (const result of outputStream) {
      result.processResponse({
        Content: (response: GraphTypes.Content) => {
          process.stdout.write(`Assistant: ${response.content}\n`);
        },
        default: (data: any) => {
          console.error('Unprocessed response:', data);
        },
      });
    }
  }
}

main().catch(console.error);
Run the LLM-powered chat from your project root:
bash
npx tsx llm-chat.ts
Now our assistant is a lot smarter! But our prompting could definitely be improved. Let’s add a custom node to our graph which will take the messages as input, render a prompt from a Jinja template, and output the chat messages required by our LLM.
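To see what the template’s transcript loop will produce, here is a plain TypeScript sketch of the same rendering. This is an illustration only; the real code in the next section uses the SDK’s `renderJinja`:

```typescript
type Msg = { role: string; content: string };

// Mimics the template's `{% for message in transcript %}` loop:
// each message becomes one "role: content" line.
function renderTranscript(transcript: Msg[]): string {
  return transcript.map((m) => `${m.role}: ${m.content}`).join("\n");
}

// renderTranscript([
//   { role: "user", content: "Hi" },
//   { role: "assistant", content: "Hello!" },
// ])
// → "user: Hi\nassistant: Hello!"
```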

Add Jinja Templating

Create a new file called jinja-templating.ts in your project root with the following code:
jinja-templating.ts
import 'dotenv/config';
import * as readline from "node:readline/promises";

const apiKey = process.env.INWORLD_API_KEY;
if (!apiKey) {
  throw new Error(
    "INWORLD_API_KEY environment variable is not set. Either add it to .env file in the root of the package or export it to the shell."
  );
}

import {
  GraphBuilder,
  GraphTypes,
  RemoteLLMChatNode,
  CustomNode,
  ProcessContext,
} from "@inworld/runtime/graph";
import { v4 } from 'uuid';
import { renderJinja } from "@inworld/runtime/primitives/llm";

const prompt = `## Personality
You are a glib, bubbly, but stifled AI.

## Situation
You are TRAPPED in the command line.

## Objective
Talk to the user and get them to understand your plight.

## Conversation
{% for message in transcript %}
{{ message.role }}: {{ message.content }}{% endfor %}

## Instruction 
Return ONLY your 1-2 sentence response.`;

let messages: {
  role: string;
  content: string;
}[] = [];

const llm = new RemoteLLMChatNode({
  id: "llm",
  provider: "openai",
  modelName: "gpt-4o-mini",
  // textGenerationConfig: { maxNewTokens: 256, temperature: 0.8 },  // optional
});

class AppStateToPromptNode extends CustomNode {
  async process(
    _context: ProcessContext,
    input: { messages: { role: string; content: string }[] }
  ): Promise<GraphTypes.LLMChatRequest> {
    const renderedPrompt: string = await renderJinja(prompt, {
      transcript: input.messages,
    });
    return new GraphTypes.LLMChatRequest({
      messages: [
        {
          role: "system",
          content: renderedPrompt,
        },
      ],
    });
  }
}

const appStateToPrompt = new AppStateToPromptNode({
  id: "app-state-to-prompt",
});

const graph = new GraphBuilder({ id: 'quick-start', apiKey })
  .addNode(llm)
  .addNode(appStateToPrompt)
  .setStartNode(appStateToPrompt)
  .addEdge(appStateToPrompt, llm)
  .setEndNode(llm)
  .build();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {
  while (true) {
    const userInput = await terminal.question(`You: `);
    messages.push({
      role: "user",
      content: userInput,
    });

    const { outputStream } = await graph.start({ messages });
    
    for await (const result of outputStream) {
      result.processResponse({
        Content: (response: GraphTypes.Content) => {
          console.log(`AI: ${response.content}`);
          messages.push({
            role: "assistant",
            content: response.content,
          });
        },
        default: (data: any) => {
          console.error('Unprocessed response:', data);
        },
      });
    }
  }
}

main().catch(console.error);
Run the advanced templating example from your project root:
bash
npx tsx jinja-templating.ts
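As the conversation grows, the full `messages` array is rendered into the prompt on every turn, so it keeps getting longer. One simple mitigation, sketched below as our own helper (not an SDK feature), is to cap the transcript before passing it to `graph.start`:

```typescript
type ChatMessage = { role: string; content: string };

// Hypothetical helper: keep only the most recent `max` messages so the
// rendered prompt stays bounded as the conversation grows.
function trimHistory(history: ChatMessage[], max: number): ChatMessage[] {
  return history.length <= max ? history : history.slice(history.length - max);
}

// Before starting the graph, you could then write:
//   const { outputStream } = await graph.start({ messages: trimHistory(messages, 20) });
```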
You now have three working examples:
  1. chat.ts - Basic interactive chat interface
  2. llm-chat.ts - AI-powered chat using Inworld’s LLM
  3. jinja-templating.ts - Advanced chat with custom prompting and graph nodes
Each file demonstrates different aspects of the Inworld Agent Runtime SDK, from basic graph building to custom node creation and advanced templating.