- AI Component Library: A library of powerful AI components, such as LLMs, Text-to-Speech (TTS), and Speech-to-Text (STT), that can be constructed into Graphs to power conversational characters, interactive agents, and other advanced AI-driven experiences in your Unreal projects.
- Rich Observability: Dashboards, traces, and logs with no extra setup required. These enable you to debug, observe, and improve your AI interactions.
- Playgrounds: Quickly test different models and prompts before adding them to your experience.
Graphs
At the core of Inworld’s Runtime is a high-performance, C++-based graph execution engine. The engine executes Graphs composed of Nodes and Edges, where each node performs a specific processing task—often an AI operation such as language generation (LLM), speech-to-text (STT), or text-to-speech (TTS)—and edges define the flow of data between them. A graph:
- Contains a collection of nodes
- Defines edges between nodes
- Must have at least one start node
- Must have at least one end node
- Supports both linear and non-linear execution paths
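To make the structure concrete, here is a minimal, illustrative sketch of these concepts in Python. The `Graph` and `Node` names and methods are hypothetical stand-ins for illustration only, not the Inworld Runtime API (which is C++ based):

```python
from typing import Callable, Dict, List

class Node:
    """A node performs one processing step on its input data."""
    def __init__(self, name: str, process: Callable[[str], str]):
        self.name = name
        self.process = process

class Graph:
    """A toy graph: a collection of nodes plus edges defining data flow."""
    def __init__(self) -> None:
        self.nodes: Dict[str, Node] = {}
        self.edges: Dict[str, List[str]] = {}

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def add_edge(self, src: str, dst: str) -> None:
        self.edges.setdefault(src, []).append(dst)

    def execute(self, start: str, data: str) -> str:
        # Follow the first outgoing edge at each step; a node with no
        # outgoing edges acts as an end node.
        current = start
        while True:
            data = self.nodes[current].process(data)
            nexts = self.edges.get(current, [])
            if not nexts:
                return data
            current = nexts[0]

# A linear STT -> LLM -> TTS pipeline, with string tags standing in
# for real audio/text payloads.
g = Graph()
g.add_node(Node("stt", lambda audio: f"transcript({audio})"))
g.add_node(Node("llm", lambda text: f"reply({text})"))
g.add_node(Node("tts", lambda text: f"speech({text})"))
g.add_edge("stt", "llm")
g.add_edge("llm", "tts")
print(g.execute("stt", "mic_input"))  # speech(reply(transcript(mic_input)))
```

The same structure generalizes to non-linear paths by giving a node multiple outgoing edges and choosing among them at runtime.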
Nodes
Nodes are building blocks that each perform a specific processing task, such as speech-to-text conversion, intent detection, or language model interaction. Built-in Nodes provide pre-built functionality for common use cases, and you can create Custom Nodes to extend the runtime’s capabilities. Nodes:
- Encapsulate ML models or transformations with standard interfaces
- Process input data and produce output data
- Include built-in telemetry for performance monitoring and debugging
- Have built-in error handling
- Handle lifecycle management, including standardized initialization and cleanup on graph shutdown
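The responsibilities above can be sketched as a simple node class. This is a hypothetical Python illustration of the lifecycle and error-handling pattern, not the actual Inworld Runtime node interface:

```python
class CustomNode:
    """Illustrative node with a standardized lifecycle and error handling."""
    def __init__(self, name: str):
        self.name = name
        self.initialized = False

    def initialize(self) -> None:
        # Standardized setup, e.g. loading a model or opening a connection.
        self.initialized = True

    def process(self, data: str) -> str:
        # Error handling: fail fast with a clear message instead of
        # producing undefined output.
        if not self.initialized:
            raise RuntimeError(f"{self.name}: process() called before initialize()")
        return data.upper()  # stand-in for a real transformation

    def shutdown(self) -> None:
        # Standardized cleanup on graph shutdown.
        self.initialized = False

node = CustomNode("uppercase")
node.initialize()
print(node.process("hello"))  # HELLO
node.shutdown()
```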
Primitives
Many of the built-in nodes rely on primitives: fundamental components like Large Language Models (LLMs), Text-to-Speech (TTS), and Text Embedders. These are the “raw ingredients” of any AI-powered application. Think of them as a library of high-performance AI modules, designed to abstract away the complexities of working with various providers, models, and hardware—allowing you to build on a consistent, provider-agnostic foundation. We recommend using primitives through our built-in nodes, but you can also leverage them directly in custom nodes. See this guide for more details about configuring primitives.
Edges
Edges define the flow of data between nodes, creating a processing pipeline. The runtime supports sophisticated edge configurations, including:
- Conditions: Control whether data flows along an edge based on runtime conditions
- Connection Types: Optional vs. required connections
- Loops: Iterative processing capabilities
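Conditional edges and loops combine naturally: a result can be routed back through a node until a condition passes. The following is a conceptual Python sketch with hypothetical names; the real Runtime expresses this through its graph configuration rather than a hand-written loop:

```python
def run_with_condition_and_loop(text: str, max_iters: int = 5) -> str:
    """Loop edge: keep "refining" the data until a condition approves it."""
    for _ in range(max_iters):
        text = text + "!"           # stand-in for a refinement node
        if text.endswith("!!!"):    # condition on the edge to the end node
            return text             # condition met: flow exits the loop
    return text                     # fallback after the iteration cap

print(run_with_condition_and_loop("hi"))  # hi!!!
```

The iteration cap mirrors a practical concern with loop edges: always bound the number of passes so a condition that never fires cannot stall the graph.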
Observability
Runtime provides rich observability tools with no extra setup required. You can monitor your AI interactions through:
- Traces: Understand the flow of your application with detailed execution traces. Use them to identify latency bottlenecks and debug issues when they arise.
- Logs: Review historical data to monitor errors and debug issues.
- Dashboards: Get real-time visibility into your application health. Track performance, resource usage, and application KPIs through comprehensive dashboards and detailed data views.
Experiments
Runtime lets you iterate on prompts, models, and other configuration (for example, LLM and TTS settings) without redeploying code to already shipped builds. See the Experiments guide for detailed information on setting up and running A/B experiments.
Playgrounds
Inworld Portal provides interactive Playgrounds that let you experiment with different models and tune prompts before deploying them in graph variants:
- LLM Playground: Experiment with different language models, prompts, and response settings.
- TTS Playground: Try different models and voices, or clone your own voice.