
Get Started with Dashboards
Enable Metrics Collection
To start using dashboards, you’ll need to configure telemetry in your application:
- Node.js
- Unreal
- C++
Add the following code snippet to your application:
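What the snippet looks like depends on your platform. For a Node.js service, a minimal sketch using the OpenTelemetry SDK with an OTLP metrics exporter might look like the following; the package choice, collector URL, and export interval here are assumptions for illustration, not values from this product:

```typescript
// Minimal sketch: export application metrics over OTLP from a Node.js app.
// The endpoint URL and export interval are placeholders -- substitute the
// values from your own telemetry configuration.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics";
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-http";

const sdk = new NodeSDK({
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: "http://localhost:4318/v1/metrics", // placeholder collector endpoint
    }),
    exportIntervalMillis: 60_000, // export once per minute
  }),
});

sdk.start();
```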
View Default Dashboard
Once metrics are flowing, the default dashboard includes the following panels:
| Panel | What it Shows | 
|---|---|
| Graph Executions Total | Count of graph executions | 
| Graph Executions Errors Total | Count of total graph execution errors | 
| P50 / P99 Graph Execution Latency | Percentile latency for the full graph execution | 
| Node Executions Total | Count of node executions across all nodes | 
| Node Executions Errors Total | Count of total node execution errors across all nodes | 
| P50 / P99 Node Execution Latency | Percentile latency for all node executions | 
| LLM Node: Output Tokens Rate | Number of LLM output tokens per unit of time | 
| P50 / P99 LLM Time to First Token Latency | Percentile latency for time to first token | 
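The panels above are populated by metrics the framework emits automatically. Purely to illustrate the two metric shapes involved (counters behind the “… Total” panels, histograms behind the P50/P99 latency panels), a hand-rolled equivalent with the OpenTelemetry meter API could look like the sketch below; the metric names are hypothetical, not the framework’s actual names:

```typescript
import { metrics } from "@opentelemetry/api";

const meter = metrics.getMeter("my-app");

// Counter backing a "... Total" style panel (hypothetical metric name).
const executions = meter.createCounter("my_graph_executions_total", {
  description: "Count of graph executions",
});

// Histogram backing P50/P99 latency panels (hypothetical metric name).
const latencyMs = meter.createHistogram("my_graph_execution_latency_ms", {
  description: "Graph execution latency",
  unit: "ms",
});

const start = Date.now();
// ... run a graph ...
executions.add(1, { graph_id: "abc" });
latencyMs.record(Date.now() - start, { graph_id: "abc" });
```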
Building Custom Panels
Ready to create your own custom panels? Here’s the step-by-step process:
1. Create Panel
- Click on the dashboard you want to view
- Click New Panel on the top right corner
- Select a chart type: Time Series, Number, Table, Bar, or Pie Chart
2. Build Your Query
The visual query builder keeps this simple: just choose a metric and an aggregator (there’s a worked example after these steps).


- Select a metric: What you want to measure (e.g., framework_executions_total)
- Pick an aggregator: How to calculate values (Count, Average, P99, etc.)
- Add filters (optional): Use WHERE conditions, group by dimensions, or set time aggregation
- Click Stage & Run Query
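For example, to chart how often a particular graph runs, you might select framework_executions_total as the metric, Count as the aggregator, add a WHERE condition such as graph_id = "abc", group by service name, and then click Stage & Run Query.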

Query Builder Guide
Essential Fields
The core fields you’ll use for most charts:
| Field | What it Does | Example | 
|---|---|---|
| Metric Name | The specific metric to display | framework_executions_total | 
| Aggregate Operator | How to calculate values | Count, Avg, P99, Sum | 
| WHERE | Filter your data | graph_id = "abc" | 
| Group by | Split data into separate lines | By service name, endpoint | 
Advanced Fields
For more complex queries and customization:
| Field | What it Does | Example | 
|---|---|---|
| Limit | Maximum number of groups to show | 10 (top 10 services) | 
| HAVING | Filter groups after aggregation | GroupBy(operation) > 5 | 
| Order by | Sort the results | By value desc, by name | 
| Aggregate Every | Time resolution (seconds) | 60 = one point per minute | 
| Legend Format | Customize chart labels | {{service_name}} | 
Aggregate Operators

- NOOP: No operation - shows raw metric values without aggregation
- Count: Number of events or data points
- Count Distinct: Number of unique values
- Sum: Adds up all values within each time period (e.g., total requests per minute)
- Sum_increase: Shows increase in cumulative counters over time (e.g., how much a “total requests” counter grew)
- Avg: Average value across all data points
- Max: Highest value in the dataset
- Min: Lowest value in the dataset
- P50: 50th percentile (median)
- P75: 75th percentile (third quartile)
- P90: 90th percentile - only 10% of values are above this
- P95: 95th percentile - only 5% of values are above this
- P99: 99th percentile - only 1% of values are above this
- Sum_rate: Sum of individual rate calculations (e.g., total requests/sec across all services)
- Avg_rate: Average of individual rate calculations
- Max_rate: Maximum of individual rate calculations
- Min_rate: Minimum of individual rate calculations
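As a rough illustration of the counter-based operators: if a cumulative “total requests” counter reads 100 at the start of a one-minute interval and 160 at the end, the increase for that interval is 60 and the corresponding rate is about 1 request per second; the Sum_increase and Sum_rate operators then add those per-series values across every series in the group.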
Time Aggregation Settings
The Aggregate Every field controls the time resolution of your chart. As a rule of thumb:
- Longer time ranges → Higher aggregation intervals (fewer data points, smoother charts)
- Shorter time ranges → Lower aggregation intervals (more data points, more detail)
| Selected Time Range | Aggregate Every (seconds) | Human-Readable Interval | 
|---|---|---|
| 30 minutes | 60 | 1 minute | 
| 60 minutes | 60 | 1 minute | 
| 1 hour | 60 | 1 minute | 
| 3 hours | 60 | 1 minute | 
| 6 hours | 60 | 1 minute | 
| 12 hours | 120 | 2 minutes | 
| 1 day | 300 | 5 minutes | 
| 3 days | 900 | 15 minutes | 
| 1 week | 1800 | 30 minutes | 
| 10 days | 3600 | 60 minutes | 
| 2 weeks | 3600 | 60 minutes | 
| 1 month | 9000 | 2h 30min | 
| 2 months | 18000 | 5 hours | 
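These defaults roughly balance detail against chart density: for example, a 1-week window is 604,800 seconds, so a 1,800-second interval works out to about 604,800 ÷ 1,800 ≈ 336 points per series.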
Chart looks choppy or has gaps
- When: Sparse data + short time range (e.g., events every 5 minutes but aggregating every 1 minute)
- Result: Lots of empty intervals, choppy chart with gaps
- Fix: Increase Aggregate Every (e.g., change from 60 to 300 seconds)
Chart looks too smooth
- When: Frequent data + long time range (e.g., events every 30 seconds but aggregating every 5 hours)
- Result: Important spikes and patterns get smoothed out
- Fix: Decrease Aggregate Every (e.g., change from 18000 to 3600 seconds)