Safety checker node for content moderation and safety analysis. Analyzes text input for potentially harmful content across multiple topic categories and returns safety assessment results. You can either supply a pre-configured safety checker component, which can be reused across multiple nodes, or let the node create a new component for you.

Input: String - The data type that SafetyCheckerNode accepts as input

Output: GraphTypes.SafetyResult - The data type that SafetyCheckerNode outputs

Example:
// Using component configuration
const safetyNode = new SafetyCheckerNode({
  id: 'content-safety-check',
  modelWeightsPath: '/models/safety_weights.json',
  safetyConfig: {
    forbiddenTopics: [
      { topicName: TopicName.Violence, threshold: 0.7 },
      { topicName: TopicName.Profanity, threshold: 0.8 }
    ]
  }
});

// Using existing safety checker component
const safetyComponent = new SafetyCheckerComponent({
  id: 'existing-safety-component',
  modelWeightsPath: '/models/safety_weights.json'
});
const safetyNodeWithComponent = new SafetyCheckerNode({
  id: 'content-safety-check',
  safetyCheckerComponent: safetyComponent
});
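To show where the node fits in the String -> GraphTypes.SafetyResult flow, here is a minimal downstream sketch. The invocation method (process) and the result fields (isSafe, flaggedTopics) are hypothetical placeholders, not confirmed API; consult GraphTypes.SafetyResult for the actual shape.

// Hypothetical sketch of consuming the node's output. The process()
// method and the isSafe / flaggedTopics fields are assumptions used
// only to illustrate the input/output types documented above.
async function moderate(text: string) {
  const result: GraphTypes.SafetyResult = await safetyNode.process(text); // hypothetical method
  if (!result.isSafe) { // hypothetical field
    console.warn('Unsafe content detected:', result.flaggedTopics); // hypothetical field
  }
}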

Constructor

new SafetyCheckerNode(
    props: SafetyCheckerNodeProps | SafetyCheckerNodeWithComponentProps,
): SafetyCheckerNode
Creates a new SafetyCheckerNode instance.

Parameters

props (SafetyCheckerNodeProps | SafetyCheckerNodeWithComponentProps) Configuration for the safety checker node.
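As a rough sketch, the two accepted prop shapes can be inferred from the example above. Only the fields shown there (id, modelWeightsPath, safetyConfig, safetyCheckerComponent) are documented; field optionality and exact types here are assumptions.

// Sketch of the two prop shapes, inferred from the example above.
// Optionality and exact types are assumptions, not confirmed API.
interface SafetyCheckerNodeProps {
  id: string;
  modelWeightsPath: string;
  safetyConfig?: {
    forbiddenTopics: { topicName: TopicName; threshold: number }[];
  };
}

interface SafetyCheckerNodeWithComponentProps {
  id: string;
  safetyCheckerComponent: SafetyCheckerComponent;
}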

Returns

SafetyCheckerNode

Remarks

When constructed with SafetyCheckerNodeProps, the SafetyCheckerNode creates its own safety checker component internally using the factory pattern, so a separate SafetyCheckerComponent is not required. When constructed with SafetyCheckerNodeWithComponentProps, it reuses the provided component instead.

Overrides AbstractNode.constructor
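To illustrate the reuse path described in the overview, here is a minimal sketch of sharing one pre-configured component across two nodes. All constructor fields come from the example above; the node and component ids are made up for illustration.

// Sketch: reusing one pre-configured component across multiple nodes,
// as the overview describes. Ids here are made up for illustration.
const sharedComponent = new SafetyCheckerComponent({
  id: 'shared-safety-component',
  modelWeightsPath: '/models/safety_weights.json'
});

// Both nodes delegate to the same underlying component, so the model
// weights are loaded and configured once.
const inputCheck = new SafetyCheckerNode({
  id: 'user-input-safety-check',
  safetyCheckerComponent: sharedComponent
});
const outputCheck = new SafetyCheckerNode({
  id: 'model-output-safety-check',
  safetyCheckerComponent: sharedComponent
});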