
Traces, History, and Performance

Traces are the detailed record of everything that happens during a workflow execution. Each node generates traces with input, output, error, and timing information.

Each trace contains:

| Field | Description |
| --- | --- |
| workflow_id | Unique execution identifier |
| workflow_name | Workflow name |
| module_name | Name of the node that generated the trace |
| node_id | Node ID in the editor |
| step | Description of the executed step |
| details | Detailed data in JSON (input, output, metadata) |
| level | Trace level (see table below) |
| session_id | Session ID to group traces from the same execution |
| timestamp | Exact date and time |

| Level | Usage | When it appears |
| --- | --- | --- |
| TRACE | Detailed execution flow | Input and output of each node, variable resolutions |
| DEBUG | Debugging information | Intermediate data, internal states |
| INFO | General information | Execution start/end, main results |
| WARN | Warnings | Missing optional data, retries, unexpected responses |
| ERROR | Errors | Node failures, connection errors, exceptions |
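The level hierarchy above lends itself to threshold filtering: show everything at or above a chosen level. A minimal sketch, assuming the five level names from the table and an illustrative numeric ranking:

```python
# Sketch: ordering trace levels by severity so a client can filter out
# anything below a chosen threshold. The level names come from the table
# above; the numeric ranks are an assumption for illustration.

LEVELS = {"TRACE": 0, "DEBUG": 1, "INFO": 2, "WARN": 3, "ERROR": 4}

def at_least(trace: dict, minimum: str) -> bool:
    """Return True if the trace's level is at or above `minimum`."""
    return LEVELS[trace["level"]] >= LEVELS[minimum]

traces = [
    {"module_name": "Webhook", "level": "TRACE"},
    {"module_name": "HTTP", "level": "ERROR"},
    {"module_name": "SendMail", "level": "INFO"},
]
visible = [t for t in traces if at_least(t, "INFO")]
print([t["module_name"] for t in visible])  # → ['HTTP', 'SendMail']
```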

Traces are transmitted in real time via WebSocket to connected clients. This allows viewing a workflow execution live:

  1. The client connects to the WebSocket
  2. Sets filters with setFilters: specific workflow, event type, etc.
  3. Receives workflowEvent events with each generated trace
  4. Only receives traces matching their filters
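Steps 2–4 imply a server-side filter check before each workflowEvent is delivered. A minimal sketch of that matching logic, using field names from the trace table (the exact filter shape accepted by setFilters is an assumption):

```python
# Sketch of the filter matching implied by steps 2-4: a client sets
# filters (e.g. workflow name and level) and only receives matching
# workflowEvent payloads. The filter shape is an assumption.

def matches(trace: dict, filters: dict) -> bool:
    """A trace matches when every filter key equals the trace's value."""
    return all(trace.get(key) == value for key, value in filters.items())

filters = {"workflow_name": "order-sync", "level": "ERROR"}
events = [
    {"workflow_name": "order-sync", "level": "ERROR", "step": "HTTP failed"},
    {"workflow_name": "order-sync", "level": "TRACE", "step": "Webhook input"},
    {"workflow_name": "billing", "level": "ERROR", "step": "SendMail failed"},
]
delivered = [e for e in events if matches(e, filters)]
print(len(delivered))  # → 1
```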
Trace verbosity depends on which webhook endpoint triggers the execution:

  • /webhooks-dev/{hash}: Activates detailed traces (TRACE level). Use for debugging.
  • /webhooks/{hash}: Production mode with minimal traces. Use in real environments.
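Selecting between the two endpoints can be reduced to a one-line toggle. A sketch, with a placeholder base URL (the real host is deployment-specific):

```python
# Sketch: picking the webhook URL by environment. /webhooks-dev/{hash}
# enables TRACE-level output; /webhooks/{hash} keeps traces minimal.
# The base URL is a placeholder.

BASE = "https://example.invalid"

def webhook_url(hash_: str, debug: bool = False) -> str:
    path = "webhooks-dev" if debug else "webhooks"
    return f"{BASE}/{path}/{hash_}"

print(webhook_url("abc123", debug=True))  # → https://example.invalid/webhooks-dev/abc123
print(webhook_url("abc123"))              # → https://example.invalid/webhooks/abc123
```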

| Endpoint | Description |
| --- | --- |
| GET /api/traces | List traces with filters (workflow, module, level, dates) |
| GET /api/traces/:workflowId | Traces for a specific execution |
| GET /api/traces/workflow/:name | Traces by workflow name |
| GET /api/traces/module/:name | Traces for a specific module |
| GET /api/traces/levels/summary | Summary by level (last 24h) |
| GET /api/traces/search/:term | Full-text search in traces |
| GET /api/traces/stats | Storage statistics |
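Composing a filtered query against GET /api/traces is plain URL building. A sketch using only the standard library; the query parameter names (workflow, level, etc.) are assumptions based on the filters the table mentions:

```python
# Sketch: building a filtered GET /api/traces URL. Parameter names are
# assumed from the filters listed above (workflow, module, level, dates).
from urllib.parse import urlencode

def traces_url(base: str, **filters: str) -> str:
    """Build a GET /api/traces URL, dropping empty filter values."""
    query = urlencode({k: v for k, v in filters.items() if v})
    return f"{base}/api/traces?{query}" if query else f"{base}/api/traces"

url = traces_url("https://example.invalid", workflow="order-sync", level="ERROR")
print(url)  # → https://example.invalid/api/traces?workflow=order-sync&level=ERROR
```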

Traces are automatically cleaned up according to the retention period (in days) defined by the client's plan. They can also be cleaned up manually:

| Endpoint | Description |
| --- | --- |
| DELETE /api/traces/cleanup | Manual cleanup (specify days to retain) |
| DELETE /api/traces/cleanup/by-plan | Cleanup according to plan limits |
| DELETE /api/traces/workflow/:id | Delete traces for a specific workflow |
| DELETE /api/traces/clear-all | Delete all client history |

The history records every execution of every workflow, along with the state of each node: what data it received, what it returned, how long it took, and whether there were errors.

| Field | Description |
| --- | --- |
| workflow_id | Unique execution hash (e.g. wf_abc123) |
| module_name | Name of the executed node |
| status | Status: completed, failed, delayed |
| data | Node input data (JSON) |
| result | Node output data (JSON) |
| retries | Number of retries performed |
| error_message | Error message (if failed) |
| response_time_ms | Execution time in milliseconds |
| last_execution | Date and time of the last execution |
| steps | JSON array with the accumulated step history |
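History rows with this shape aggregate naturally into a per-workflow summary. A minimal sketch with invented sample rows following the field table above:

```python
# Sketch: aggregating history rows (status, response_time_ms, retries)
# into a quick summary. The sample rows are invented, shaped like the
# field table above.

def summarize(rows: list[dict]) -> dict:
    return {
        "executions": len(rows),
        "failed": sum(r["status"] == "failed" for r in rows),
        "avg_ms": sum(r["response_time_ms"] for r in rows) / len(rows),
        "total_retries": sum(r["retries"] for r in rows),
    }

rows = [
    {"status": "completed", "response_time_ms": 120, "retries": 0},
    {"status": "failed", "response_time_ms": 3040, "retries": 2},
    {"status": "completed", "response_time_ms": 200, "retries": 0},
]
print(summarize(rows))  # avg_ms works out to 1120.0 for this sample
```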

Each time a workflow executes, a unique instance is created in the workflowinstances table:

| Field | Description |
| --- | --- |
| workflow_id | Unique instance hash (wf_xxx) |
| workflow_name | Workflow name |
| workflow_parent_id | Template workflow ID (to link multiple executions) |
| created_at | When the execution started |

This allows querying all executions of a specific workflow, comparing executions with each other, and detecting trends.
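Grouping instances by workflow_parent_id is what links executions back to their template. A sketch over invented sample rows shaped like the field table above:

```python
# Sketch: grouping workflowinstances rows by workflow_parent_id to list
# every execution of the same template workflow. Values are invented.
from collections import defaultdict

def executions_by_template(instances: list[dict]) -> dict:
    grouped: dict = defaultdict(list)
    for inst in instances:
        grouped[inst["workflow_parent_id"]].append(inst["workflow_id"])
    return dict(grouped)

instances = [
    {"workflow_id": "wf_a1", "workflow_parent_id": "tpl_orders"},
    {"workflow_id": "wf_b2", "workflow_parent_id": "tpl_orders"},
    {"workflow_id": "wf_c3", "workflow_parent_id": "tpl_billing"},
]
print(executions_by_template(instances))
# → {'tpl_orders': ['wf_a1', 'wf_b2'], 'tpl_billing': ['wf_c3']}
```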

| Endpoint | Description |
| --- | --- |
| GET /api/workflow/stats/:id | Complete workflow statistics |
| GET /api/workflow/stats/:id/nodes | Statistics per node |
| GET /api/workflow/stats/:id/errors | Error distribution |
| GET /api/workflow/stats/:id/sessions | Last 20 execution sessions |
| GET /api/workflow/stats/:id/timeline | Execution timeline (for waterfall visualization) |

For each execution session you can see:

  • Total execution duration
  • Number of traces generated
  • Whether there were errors and in which node
  • Final status (successful or failed)

The platform provides detailed metrics to analyze the performance of each workflow and each individual node.

When querying GET /api/workflow/stats/:id, you get:

{
  "total_executions": 1250,
  "success_rate": 97.5,
  "error_rate": 2.5,
  "total_errors": 31,
  "avg_duration": 2340,
  "nodes_used": ["Webhook", "HTTP", "Decision", "SendMail", "End"],
  "node_stats": { ... },
  "execution_trend": [ ... ]
}

| Metric | Description |
| --- | --- |
| total_executions | Total number of workflow executions |
| success_rate | Percentage of successful executions |
| error_rate | Percentage of executions with errors |
| total_errors | Total number of failed executions |
| avg_duration | Average execution time in milliseconds |
| nodes_used | List of modules used |
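The two rate metrics are straightforward percentages of total_executions. A sketch reproducing the sample response's numbers, assuming one-decimal rounding:

```python
# Sketch: how success_rate and error_rate relate to the counters.
# The figures come from the sample stats response above; one-decimal
# rounding is an assumption.

total_executions = 1250
total_errors = 31

error_rate = round(100 * total_errors / total_executions, 1)
success_rate = round(100 - 100 * total_errors / total_executions, 1)
print(error_rate, success_rate)  # → 2.5 97.5
```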

Each node has its own statistics:

| Metric | Description |
| --- | --- |
| total_executions | Times the node was executed |
| avg_duration | Average node time (ms) |
| max_duration | Maximum recorded time (ms) |
| error_count | Number of node failures |
| success_rate | Node success percentage |

This allows identifying bottlenecks: if an HTTP node takes an average of 3 seconds while the rest take milliseconds, you know where to optimize.
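Finding that bottleneck amounts to ranking nodes on avg_duration. A sketch using a node_stats shape modeled on the per-node metric table (the exact structure returned by the API is an assumption):

```python
# Sketch: spotting a bottleneck by ranking nodes on avg_duration.
# The node_stats shape is an assumption modeled on the table above;
# the numbers mirror the HTTP-takes-3-seconds scenario.

node_stats = {
    "Webhook": {"avg_duration": 4, "error_count": 0},
    "HTTP": {"avg_duration": 3000, "error_count": 12},
    "SendMail": {"avg_duration": 180, "error_count": 2},
}

slowest = max(node_stats, key=lambda n: node_stats[n]["avg_duration"])
print(slowest)  # → HTTP
```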

The execution_trend field shows executions over the last 10 days, grouped by date:

[
  { "date": "2026-03-20", "executions": 45, "errors": 1 },
  { "date": "2026-03-21", "executions": 52, "errors": 0 },
  { "date": "2026-03-22", "executions": 48, "errors": 3 },
  { "date": "2026-03-23", "executions": 38, "errors": 0 }
]

Useful for detecting activity spikes, increases in error rates, or changes after a workflow modification.
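One way to flag an error-rate increase in that trend is a simple per-day threshold scan. A sketch over the sample data above (the 5% threshold is an arbitrary illustration):

```python
# Sketch: scanning execution_trend for days where the error rate jumps
# above a threshold. Data matches the sample above; the 5% threshold
# is an arbitrary choice for illustration.

trend = [
    {"date": "2026-03-20", "executions": 45, "errors": 1},
    {"date": "2026-03-21", "executions": 52, "errors": 0},
    {"date": "2026-03-22", "executions": 48, "errors": 3},
    {"date": "2026-03-23", "executions": 38, "errors": 0},
]

def noisy_days(trend: list[dict], threshold: float = 0.05) -> list[str]:
    return [d["date"] for d in trend
            if d["executions"] and d["errors"] / d["executions"] > threshold]

print(noisy_days(trend))  # → ['2026-03-22']
```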

GET /api/workflow/stats/:id/errors returns:

  • Error distribution by node (which nodes fail the most)
  • Error message grouping (repetitive errors)
  • Top 20 most frequent errors

This allows prioritizing fixes: if 80% of errors are “timeout” on an HTTP node, the solution is to adjust the timeout or improve the target API.
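The message-grouping and top-N views the endpoint describes reduce to frequency counting. A sketch with invented error samples:

```python
# Sketch: grouping repeated error messages to get a "top N most
# frequent errors" view like the endpoint's. The (node, message)
# samples are invented.
from collections import Counter

errors = [
    ("HTTP", "timeout"),
    ("HTTP", "timeout"),
    ("HTTP", "timeout"),
    ("SendMail", "invalid recipient"),
    ("HTTP", "timeout"),
]

top = Counter(msg for _, msg in errors).most_common(20)
print(top[0])  # → ('timeout', 4)
```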

GET /api/workflow/stats/:id/timeline provides the chronological execution sequence of each node, ideal for visualizing:

  • The actual execution order
  • Nodes that execute in parallel
  • Where execution time is concentrated
  • Wait points (delays, merges)
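The timeline's uses above can be sketched with a reduce over (node, start, end) entries: total time per node reveals where execution concentrates, and interval overlap reveals parallelism. The entry shape here is an assumption modeled on the waterfall description:

```python
# Sketch: reducing a timeline to (a) where execution time concentrates
# and (b) which nodes ran in parallel. The timeline entry shape is an
# assumption; times are invented milliseconds.

timeline = [
    {"node": "Webhook", "start_ms": 0, "end_ms": 5},
    {"node": "HTTP", "start_ms": 5, "end_ms": 3005},
    {"node": "Decision", "start_ms": 5, "end_ms": 15},   # overlaps HTTP
    {"node": "SendMail", "start_ms": 3005, "end_ms": 3200},
]

durations = {t["node"]: t["end_ms"] - t["start_ms"] for t in timeline}
heaviest = max(durations, key=durations.get)

def overlaps(a: dict, b: dict) -> bool:
    """Two half-open intervals overlap when each starts before the other ends."""
    return a["start_ms"] < b["end_ms"] and b["start_ms"] < a["end_ms"]

parallel = [(a["node"], b["node"])
            for i, a in enumerate(timeline)
            for b in timeline[i + 1:] if overlaps(a, b)]
print(heaviest, parallel)  # → HTTP [('HTTP', 'Decision')]
```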