Large Language Models (LLMs) are a key component for building AI-powered experiences. They can power capabilities like dialog generation, game state changes, intent detection, and more.

Overview

The LLM node, powered by the UInworldNode_LLM class, provides a high-level interface that integrates LLM clients to generate text responses within your graph. It works with Chat Request and Chat Response data to enable conversational AI capabilities. The system abstracts away backend complexity, exposing a consistent API across models and providers for:
  • Chat-based text generation with message history
  • Configurable generation parameters (token limits, temperature, etc.)
  • Streaming and non-streaming response modes
  • Integration with Chat Request/Response workflow

Working with the LLM node

To add an LLM node to your graph (or create a graph with just an LLM node) in the Graph Editor:
  1. Right-click in the graph editor and add the LLM node from the available node library
  2. In the node’s details panel:
    • Under LLM Model, select the desired model. If it is not in the dropdown, you can configure additional models by following the instructions here.
    • Adjust the Text Generation Config property to set the desired text generation parameters, such as token limits and temperature.
    • Leave Stream checked to stream text token outputs, or uncheck it to receive the complete text output.
  3. Connect the input of the LLM node to an LLMChatRequest data source, typically a custom node. The Chat Request corresponds to the prompt, messages, and configuration that will be provided to the LLM.
    • If this is the first node in your graph, make sure to mark the node as the start node by right clicking on it and selecting “Set As Start”.
  4. Configure the LLM node output:
    • If this is the final node in your graph, mark it as an end node by right-clicking and selecting “Set As End”
    • Otherwise, connect the LLM Chat Response output to other nodes that process FInworldData_LLMChatResponse
    • The node outputs a complete Chat Response containing generated text and metadata
  5. Save and run your graph!

Creating Chat Requests

To generate a Chat Request to be provided as input to the LLM node:
  1. Create a custom node in the graph editor by selecting the “New Custom Node” button at the top left of the graph editor. Give the node a name, and save.
  2. After saving, open the custom node’s blueprint. In the blueprint, create a new function prefixed with “Process” (e.g. “Process_Default”).
  3. In the function’s Details panel, add an Output of type Inworld Data LLM Chat Request.
  4. Right click in the function blueprint and search for “Make InworldData_LLMChatRequest”. Select it.
  5. Construct your chat request. To construct a simple prompt that only contains a single user message:
    • Drag the output of the Make InworldData_LLMChatRequest node to the return node of the function.
    • From the Chat Messages input of the Make InworldData_LLMChatRequest node, drag and select Make Array.
    • From the Make Array node’s input, drag and select Make InworldLLMMessage.
    • In the Role parameter, select User. In the Content parameter, type in your desired prompt.
    If you want to add a system message or include multiple messages in your prompt, add additional elements to the array. The C++ sketch after this list shows the equivalent request construction in code.
  6. This custom node can now be added to the graph: select your new node from the context menu, then drag its output to the input of the LLM node.
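
For reference, the same single-user-message request can be assembled in native C++. The following is a minimal sketch: the member names (ChatMessages, Role, Content) and the role enum name are assumptions based on the Blueprint pin names above, so verify them against your plugin version.

// Minimal sketch of building a chat request in C++. Member names and the
// role enum below are assumed from the Blueprint pin names; verify against
// your plugin version.
FInworldLLMMessage SystemMessage;
SystemMessage.Role = EInworldLLMRole::System;   // assumed enum name
SystemMessage.Content = TEXT("You are a helpful tavern keeper.");

FInworldLLMMessage UserMessage;
UserMessage.Role = EInworldLLMRole::User;       // assumed enum name
UserMessage.Content = TEXT("What rooms are available tonight?");

FInworldData_LLMChatRequest ChatRequest;
ChatRequest.ChatMessages = { SystemMessage, UserMessage };  // assumed member name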

UInworldNode_LLM Class

The core LLM node that processes chat messages and generates text responses using configured language models.
/**
 * @class UInworldNode_LLM
 * @brief A workflow node that processes chat messages and produces either
 * complete text output or a stream of text tokens based on the stream
 * parameter.
 *
 * UInworldNode_LLM encapsulates the functionality of an LLM client to generate text
 * responses within a workflow graph. It can be configured to output complete
 * text or stream tokens as they are generated.
 *
 * @input FInworldData_LLMChatRequest
 * @output FInworldData_LLMChatResponse
 */
UCLASS(Blueprintable, BlueprintType)
class INWORLDRUNTIME_API UInworldNode_LLM : public UInworldNode
{
	GENERATED_BODY()

public:
	UInworldNode_LLM();

	/**
	 * @brief Native utility function. Creates a new LLM node instance with the specified configuration
	 * @param Outer The outer object that will own this node
	 * @param NodeName The name to assign to the node
	 * @param InExecutionConfig Execution configuration settings for text generation
	 * @return Newly created LLM node instance
	 */
	static UInworldNode_LLM* CreateNative(
		class UObject* Outer, const FString& NodeName, const FInworldLLMChatNodeExecutionConfig& InExecutionConfig);

	/**
	 * Used to define and manage parameters such as token limits, randomness,
	 * and other options that influence the behavior of text generation. This
	 * configuration ensures fine-grained control over the output quality and
	 * style of generated text by the Large Language Model.
	 */
	UPROPERTY(EditDefaultsOnly, BlueprintReadWrite, Category = "Inworld")
	FInworldLLMChatNodeExecutionConfig ExecutionConfig;

private:
	virtual EInworldRuntimeErrors GetJsonConfig(const TSharedRef<FJsonObject>& GraphJson) override;

	UFUNCTION()
	UPARAM(meta = (DisplayName = "Chat Response"))
	FInworldData_LLMChatResponse Process_LLM(const FInworldData_LLMChatRequest& ChatRequest)
	{
		checkNoEntry();
		return {};
	}
};

Chat Request and Response Data Flow

The LLM node operates on a simple input/output model using structured chat data:

Input: FInworldData_LLMChatRequest

Contains the conversation context and response format preferences:
  • Chat Messages: Array of FInworldLLMMessage with role (System/User/Assistant) and content
  • Response Format: Desired LLM response format (TEXT, JSON, or JSON with schema)
Note: Generation parameters like token limits and temperature are configured in the node’s ExecutionConfig property, not in the request data.

Output: FInworldData_LLMChatResponse

Contains the generated response with streaming support:
  • Content: The LLM’s generated response text
  • Is Streaming: Boolean indicating whether this response is part of a stream
  • Stream Support: Inherits from FInworldData_Stream, allowing iteration through response chunks
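
As a rough illustration of consuming this output, a downstream handler might look like the sketch below. The member names Content and bIsStreaming are assumptions based on the field descriptions above, and the chunk-iteration API inherited from FInworldData_Stream is not shown.

// Hypothetical downstream handler; Content and bIsStreaming are assumed
// member names based on the field descriptions above.
void HandleChatResponse(const FInworldData_LLMChatResponse& ChatResponse)
{
	if (ChatResponse.bIsStreaming)
	{
		// Partial chunk: append to the text accumulated so far.
		UE_LOG(LogTemp, Log, TEXT("Token chunk: %s"), *ChatResponse.Content);
	}
	else
	{
		// Non-streaming mode: Content holds the complete generated text.
		UE_LOG(LogTemp, Log, TEXT("Full response: %s"), *ChatResponse.Content);
	}
}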

API Reference

UInworldNode_LLM Methods

Constructor

UInworldNode_LLM()
  • Description: Default constructor for the LLM node
  • Usage: Initializes the node with default settings for Large Language Model processing

CreateNative

static UInworldNode_LLM* CreateNative(
    class UObject* Outer, const FString& NodeName, const FInworldLLMChatNodeExecutionConfig& InExecutionConfig);
  • Description: Native utility function to create a new LLM node instance with specified configuration
  • Parameters:
    • Outer: The outer object that will own this node
    • NodeName: The name to assign to the node
    • InExecutionConfig: Execution configuration settings for text generation
  • Return Value: Newly created LLM node instance
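
A minimal usage sketch, assuming the call site is a UObject (passed as Outer); the commented-out ExecutionConfig fields are illustrative placeholders, not confirmed field names:

// Sketch: create an LLM node natively. Assumes this code runs inside a UObject.
FInworldLLMChatNodeExecutionConfig Config;
// Config.MaxTokens = 256;     // illustrative placeholder, not a confirmed field
// Config.Temperature = 0.7f;  // illustrative placeholder, not a confirmed field

UInworldNode_LLM* LLMNode =
	UInworldNode_LLM::CreateNative(this, TEXT("DialogueLLM"), Config);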

UInworldNode_LLM Properties

ExecutionConfig

  • Type: FInworldLLMChatNodeExecutionConfig
  • Category: Inworld
  • Description: Configuration settings for LLM execution including token limits, temperature, and other generation parameters
  • Usage: Configure in Blueprint editor to control text generation behavior