Overview
The LLM node, powered by the UInworldNode_LLM class, provides a high-level interface that integrates LLM clients to generate text responses within your graph. It works with Chat Request and Chat Response data to enable conversational AI capabilities. The system abstracts away backend complexity, exposing a consistent API across models and providers for:
- Chat-based text generation with message history
- Configurable generation parameters (token limits, temperature, etc.)
- Streaming and non-streaming response modes
- Integration with Chat Request/Response workflow
Working with the LLM node
To add an LLM node to your graph (or create a graph with just an LLM node) in the Graph Editor:
- Right click to add the LLM node to the graph editor from the available node library
 
- In the node’s details panel:
- Under LLM Model, select the desired model. If your desired model is not in the dropdown, you can configure additional models by following the instructions here
- Adjust the Text Generation Config property to set the desired text generation parameters, such as token limits and temperature.
- Leave Stream checked if you want to stream text token outputs, or uncheck Stream to receive the complete text output in a single response.
 
- Connect the input of the LLM node to an LLM Chat Request data source, typically a custom node. The Chat Request corresponds to the prompt, messages, and configuration that will be provided to the LLM.
- If this is the first node in your graph, make sure to mark it as the start node by right-clicking on it and selecting “Set As Start”.
 
- Configure the LLM node output:
- If this is the final node in your graph, mark it as an end node by right-clicking and selecting “Set As End”
- Otherwise, connect the LLM Chat Response output to other nodes that process FInworldData_LLMChatResponse
- The node outputs a complete Chat Response containing generated text and metadata
 
 
- Save and run your graph!
Creating Chat Requests
To generate a Chat Request to be provided as input to the LLM node:
- Create a custom node in the graph editor by selecting the “New Custom Node” button at the top left of the graph editor. Give the node a name, and save.
 
- After saving, open the custom node’s blueprint. In the blueprint, create a new function prefixed with “Process” (e.g. “Process_Default”).
 
- In the function’s Details panel, add an Output of type Inworld Data LLM Chat Request.
- Right click in the function blueprint and search for “Make InworldData_LLMChatRequest”. Select it.
 
- Construct your chat request. To construct a simple prompt that only contains a single user message:
- Drag the output of the Make InworldData_LLMChatRequest node to the return node of the function.
- From the Chat Messages input of the Make InworldData_LLMChatRequest node, drag and select Make Array.
- From the Make Array node’s input, drag and select Make InworldLLMMessage.
- In the Role parameter, select User. In the Content parameter, type in your desired prompt.
 
- This custom node can now be added to the graph: select your new node from the context menu and drag its output to the input of the LLM node.
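The Blueprint steps above can also be sketched in C++ for a native custom node. This is a minimal sketch only: the struct names come from this page, but the field and enum names (ChatMessages, Role, Content, EInworldLLMMessageRole) are assumptions inferred from the Blueprint pins, so verify them against the plugin headers.

```
// Hypothetical sketch: build a chat request with a single user message.
// Member names are assumptions based on the Blueprint pins shown above.
FInworldLLMMessage UserMessage;
UserMessage.Role = EInworldLLMMessageRole::User;   // assumed enum name
UserMessage.Content = TEXT("Describe the weather in one sentence.");

FInworldData_LLMChatRequest ChatRequest;
ChatRequest.ChatMessages.Add(UserMessage);         // assumed array member

// ChatRequest can now be returned from the custom node's Process_* function,
// mirroring the Return Node wiring described above.
```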
  
UInworldNode_LLM Class
The core LLM node that processes chat messages and generates text responses using configured language models.
Chat Request and Response Data Flow
The LLM node operates on a simple input/output model using structured chat data:
Input: FInworldData_LLMChatRequest
Contains the conversation context and response format preferences:
- Chat Messages: Array of FInworldLLMMessage with role (System/User/Assistant) and content
- Response Format: Desired LLM response format (TEXT, JSON, or JSON with schema)
Note that generation parameters such as token limits and temperature are configured on the node’s ExecutionConfig property, not in the request data.
Output: FInworldData_LLMChatResponse
Contains the generated response with streaming support:
- Content: The LLM’s generated response text
- Is Streaming: Boolean indicating whether this response is part of a streaming response
- Stream Support: Inherits from FInworldData_Stream, allowing iteration through response chunks
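As a rough illustration of how a downstream node might consume this output, here is a hedged sketch. The Content and Is Streaming properties come from the list above, but the exact C++ member names (bIsStreaming, Content) are assumptions; check the plugin headers for the real declarations.

```
// Hypothetical sketch of handling an LLM Chat Response in a custom node.
// bIsStreaming / Content map to the "Is Streaming" and "Content" properties
// described above; the member names are assumed, not confirmed.
FString AccumulatedText; // running text built up from streamed chunks

void HandleResponse(const FInworldData_LLMChatResponse& Response)
{
    if (Response.bIsStreaming)
    {
        // Partial chunk: append to the text received so far.
        AccumulatedText += Response.Content;
    }
    else
    {
        // Non-streaming mode: the complete generated text arrives at once.
        AccumulatedText = Response.Content;
    }
}
```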
API Reference
UInworldNode_LLM Methods
Constructor
- Description: Default constructor for the LLM node
- Usage: Initializes the node with default settings for Large Language Model processing
CreateNative
- Description: Native utility function to create a new LLM node instance with specified configuration
- Parameters:
- Outer: The outer object that will own this node
- NodeName: The name to assign to the node
- InExecutionConfig: Execution configuration settings for text generation
 
- Return Value: Newly created LLM node instance
UInworldNode_LLM Properties
ExecutionConfig
- Type: FInworldLLMChatNodeExecutionConfig
- Category: Inworld
- Description: Configuration settings for LLM execution including token limits, temperature, and other generation parameters
- Usage: Configure in Blueprint editor to control text generation behavior
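Putting the API reference together, creating the node natively might look like the following sketch. CreateNative and its parameters are taken from the reference above; the ExecutionConfig field names (MaxTokens, Temperature) are assumptions, since the reference only describes the config informally as covering token limits and temperature.

```
// Hypothetical sketch using the CreateNative factory described above.
FInworldLLMChatNodeExecutionConfig Config;
// Field names are assumptions; the reference only says this config holds
// token limits, temperature, and other generation parameters.
Config.MaxTokens = 256;
Config.Temperature = 0.7f;

UInworldNode_LLM* LLMNode = UInworldNode_LLM::CreateNative(
    /*Outer=*/GetTransientPackage(),
    /*NodeName=*/TEXT("MyLLMNode"),
    /*InExecutionConfig=*/Config);
```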