Remote LLM chat node. You can either reuse a pre-configured LLM component shared across multiple nodes, or provide an LLM provider and model name, in which case the node creates a new component for you.

Input: GraphTypes.LLMChatRequest - the data type that RemoteLLMChatNode accepts as input
Output: GraphTypes.ContentStream | GraphTypes.Content - the data type that RemoteLLMChatNode outputs

Example:
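// NOTE: the import path below is an assumption; adjust it to match your
// SDK's actual package layout.
// import { RemoteLLMChatNode } from 'your-sdk/graph';
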
// Using LLM provider configuration
const llmNode = new RemoteLLMChatNode({
  id: 'my-llm-node',
  provider: 'openai',
  modelName: 'gpt-4o-mini',
  stream: true
});

// Using existing LLM component
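// (existingLLMComponent is assumed to be an LLM component created elsewhere)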
const llmNodeWithComponent = new RemoteLLMChatNode({
  id: 'my-llm-node',
  llmComponent: existingLLMComponent
});

// Using default settings
const defaultLlmNode = new RemoteLLMChatNode();

Constructor

new RemoteLLMChatNode(
    props?: RemoteLLMChatNodeProps | RemoteLLMChatNodeWithLLMComponentProps
): RemoteLLMChatNode
Creates a new RemoteLLMChatNode instance.

Parameters

props (RemoteLLMChatNodeProps | RemoteLLMChatNodeWithLLMComponentProps) = {}

Optional configuration for the LLM chat node. Specify either LLM provider settings (to create a new internal component) or an llmComponent (to reuse an existing one), but not both. If omitted, default settings are applied.
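For reference, here is a minimal sketch of what the two prop shapes might look like, inferred solely from the examples above. The field types and the LLMComponent placeholder are assumptions, not the SDK's actual declarations:

// Sketch only: inferred from the examples above, not actual SDK declarations.
// `LLMComponent` is a hypothetical placeholder type.
type LLMComponent = unknown;

// Provider-based configuration: the node creates its own internal component.
interface RemoteLLMChatNodeProps {
  id?: string;
  provider?: string;   // e.g. 'openai'
  modelName?: string;  // e.g. 'gpt-4o-mini'
  stream?: boolean;    // shown as `true` in the streaming example above
}

// Component-based configuration: reuses an existing LLM component.
// Mutually exclusive with the provider/modelName settings above.
interface RemoteLLMChatNodeWithLLMComponentProps {
  id?: string;
  llmComponent: LLMComponent;
}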

Returns

RemoteLLMChatNode

Overrides AbstractNode.constructor