This quickstart guide will walk you through using the Inworld CLI to set up a simple LLM to TTS conversational pipeline (powered by Runtime) in just a few minutes.

Prerequisites

Before you get started, make sure you're on one of the supported platforms:
  • macOS (arm64)
  • Linux (x64)
  • Windows (x64)
You'll also need Node.js and npm to install the CLI.

Get Started

1. Install Inworld CLI

Install the Inworld CLI globally.
npm install -g @inworld/cli
2. Log in to your account

Log in to your Inworld account to use Inworld Runtime. If you don't have an account, you can create one when prompted to log in.
inworld login
# You'll be prompted to login via your browser
Once logged in, your credentials are stored and you won’t need to log in again.
3. Create your first project

Initialize project setup.
inworld init 
# Follow the prompts to be guided through setup
You will be prompted to select one of the pre-built graph templates to populate your project. For the LLM to TTS conversational pipeline, select 1 when prompted.
> 🚀 Welcome to Inworld Graph Project Initializer!
> 
> 📋 Available Templates:
> └ 1. LLM + TTS Pipeline - A production-ready pipeline that processes user input through an LLM (GPT-4o-mini by default) and converts the response to speech using TTS. Includes streaming support for better user experience. (ID: llm_tts)
> └ 2. Simple Custom Node - A basic example demonstrating how to create a custom node that processes text input. Perfect for learning the basics of the Inworld Runtime graph system. (ID: simple)
> 
> Select a template (enter number or name): 
Enter a project name, which will be the name of your project directory.
> Project name (llm-tts-graph):
Enter y when asked about installing dependencies.
> 📦 Would you like to install dependencies now? (y/n)
After following the prompts, you’ll have a project directory created with all dependencies installed. You are now ready to use your graph!
4. Run your graph

Navigate to your project directory and run your pipeline with the appropriate inputs.
cd llm-tts-graph
inworld run ./graph.ts '{"input": {"user_input":"Hello!"}}'

Run a local server

Now that you’ve successfully run your first graph, you can run a local server to test it in your application.
1. Start the local server

Start your local server.
inworld serve ./graph.ts
See the server configuration documentation for additional options, including support for gRPC and Swagger UI.
2. Test the API

Test the API with a simple curl command. Note that for the LLM to TTS pipeline, the API will return raw audio data that needs to be parsed in order to be played.
curl -X POST http://localhost:3000/v1/graph:start \
    -H "Content-Type: application/json" \
    -d '{"input": {"user_input":"Hello!"}}'
Here is an example of the output:
{"executionStarted":{"executionId":"01999de9-8a75-75f8-a17b-7ec4c1b4490e","timestamp":"2025-10-01T03:55:52.309Z","variantName":"__default__"}}
{"ttsOutputChunk":{"text":"Hello!","audio":{"data":[0,0,0,0,0,0,0,0,0...],"sampleRate":48000}},"responseNumber":1}
{"ttsOutputChunk":{"text":" How can I assist you today?","audio":{"data":[0,0,0,0,0,0,0,0,0...],"sampleRate":48000}},"responseNumber":1}
{"executionCompleted":true}
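The audio chunks arrive as arrays of raw samples. As a sketch of how they might be assembled into a playable file — assuming the samples are 16-bit signed mono PCM, which you should verify against your Runtime version — you can parse each NDJSON line, concatenate the samples, and wrap them in a WAV header:

```typescript
// Sketch: assemble streamed TTS output into a WAV file.
// Assumes `audio.data` holds 16-bit signed mono PCM samples (verify this
// against your Runtime version before relying on it).

interface TTSChunk {
  ttsOutputChunk?: {
    text: string;
    audio: { data: number[]; sampleRate: number };
  };
}

// Pull every sample out of the newline-delimited JSON response body.
function collectSamples(ndjson: string): { samples: number[]; sampleRate: number } {
  const samples: number[] = [];
  let sampleRate = 48000;
  for (const line of ndjson.split('\n')) {
    if (!line.trim()) continue;
    const parsed: TTSChunk = JSON.parse(line);
    if (parsed.ttsOutputChunk) {
      samples.push(...parsed.ttsOutputChunk.audio.data);
      sampleRate = parsed.ttsOutputChunk.audio.sampleRate;
    }
  }
  return { samples, sampleRate };
}

// Standard 44-byte RIFF/WAVE header for 16-bit mono PCM.
function wavHeader(numSamples: number, sampleRate: number): Buffer {
  const dataSize = numSamples * 2;
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + dataSize, 4);
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);             // fmt chunk size
  header.writeUInt16LE(1, 20);              // audio format: PCM
  header.writeUInt16LE(1, 22);              // channels: mono
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * 2, 28); // byte rate
  header.writeUInt16LE(2, 32);              // block align
  header.writeUInt16LE(16, 34);             // bits per sample
  header.write('data', 36);
  header.writeUInt32LE(dataSize, 40);
  return header;
}

// Concatenate the header and the samples encoded as little-endian int16.
function toWav(samples: number[], sampleRate: number): Buffer {
  const pcm = Buffer.alloc(samples.length * 2);
  samples.forEach((s, i) => pcm.writeInt16LE(s, i * 2));
  return Buffer.concat([wavHeader(samples.length, sampleRate), pcm]);
}

// Usage (against the local server started above):
//   const res = await fetch('http://localhost:3000/v1/graph:start', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify({ input: { user_input: 'Hello!' } }),
//   });
//   const { samples, sampleRate } = collectSamples(await res.text());
//   require('fs').writeFileSync('out.wav', toWav(samples, sampleRate));
```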

Make your first change

Now let's make our first modification to the LLM to TTS pipeline by changing the model and prompt.
1. Modify graph.ts

Open up the graph.ts file in your project directory, which contains the graph configuration. Modify the provider and modelName under RemoteLLMChatNode to any supported LLM.
graph.ts
import {
  LLMChatRequestBuilderNode,
  RemoteLLMChatNode,
  RemoteTTSNode,
  SequentialGraphBuilder,
  TextChunkingNode,
} from '@inworld/runtime/graph';

const graphBuilder = new SequentialGraphBuilder({
  id: 'custom-text-node-llm',
  nodes: [
    new LLMChatRequestBuilderNode({
      messages: [
        { 
          role: 'system',
          content: { type: 'template', template: 'You are an extremely sarcastic assistant. Always respond with sarcasm.' },
        },
        {
          role: 'user',
          content: { type: 'template', template: '{{user_input}}' },
        },
      ],
    }),
    new RemoteLLMChatNode({
      // Changed from provider: 'openai', modelName: 'gpt-4o-mini'
      provider: 'google',
      modelName: 'gemini-2.5-flash',
      stream: true,
    }),
    new TextChunkingNode(),
    new RemoteTTSNode(),
  ],
});

export const graph = graphBuilder.build();
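Templates can reference keys from the input JSON, as `{{user_input}}` does. As a sketch (assuming additional template variables resolve from the input object the same way, which you should verify), you could parameterize the system prompt with a hypothetical `persona` variable:

```typescript
// Hypothetical variation: parameterize the persona via a template variable.
// Assumes extra template variables resolve from keys in the input JSON,
// the same way {{user_input}} does -- verify against the Runtime docs.
new LLMChatRequestBuilderNode({
  messages: [
    {
      role: 'system',
      content: { type: 'template', template: 'You are {{persona}}. Stay in character.' },
    },
    {
      role: 'user',
      content: { type: 'template', template: '{{user_input}}' },
    },
  ],
}),
```

You would then supply the extra key at run time, e.g. `inworld run ./graph.ts '{"input": {"user_input":"Hello!","persona":"a grumpy pirate"}}'`.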

2. Test the API

Test your updated graph.
inworld run ./graph.ts '{"input": {"user_input":"Hello!"}}'

Next Steps

Now that you've learned the basics, you're ready to explore more advanced features.

Need Help?