Merge Datasets

This module merges data from multiple workflow branches into a single consolidated object. It acts as a convergence point that waits for data from all predecessor nodes before continuing. Partial data is stored in a temporary database table (tempmergedata) until every source node has sent its data. Once all sources have contributed, the module combines the results and passes them to the next node. It is essential in workflows with parallel branches that need to meet at a common point.
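The convergence behavior above can be sketched as follows. This is a minimal illustration, not the module's actual implementation: the dictionary stands in for the tempmergedata table, and the function and parameter names (handle_payload, merge_key, expected_sources) are assumptions chosen for the example.

```python
# Hypothetical sketch of the convergence logic: each incoming payload is
# stored under its source node's name; the merged object is only emitted
# once every expected source has reported.
temp_merge_data = {}  # stand-in for the tempmergedata table

def handle_payload(merge_key, source, payload, expected_sources):
    branch = temp_merge_data.setdefault(merge_key, {})
    branch[source] = payload  # partial data keyed by source node name
    if set(branch) < set(expected_sources):
        # Not all predecessors have contributed yet: pause the flow.
        return {"nextModule": None, "data": None}
    # All sources present: pass the consolidated object onward.
    return {"nextModule": "next_module", "data": branch}
```

Calling it once per branch returns nextModule: null until the final source arrives, at which point the combined data is released.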

Parameter   Type     Required   Description
sources     string   No         List of expected source nodes (automatically configured from predecessors).
mapschema   string   No         Optional schema used to transform the merged data.
Example output once all sources have contributed:

{
  "nextModule": "next_module",
  "data": {
    "api_node_1": { "result": "data from node 1" },
    "api_node_2": { "result": "data from node 2" }
  }
}
  • Partial data is persisted in the tempmergedata database table.
  • The module only continues when ALL predecessor nodes have sent their data.
  • If some data has not yet arrived, it returns nextModule: null, pausing the flow.
  • Metadata (_meta_) and the source property are removed from each payload before storing.
  • The merge key is built as merge_{workflowId}_{node_name_alias}.
  • Supports deep merge, with array concatenation and recursive object merging.
  • Each partial payload is stored using its source node's name as the key.
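The deep-merge behavior described in the notes (arrays concatenated, nested objects merged recursively) can be sketched as below. This is an illustrative assumption of the strategy, not the module's source; the function name deep_merge and the rule that the later value wins for scalars are choices made for the example.

```python
# Hypothetical deep-merge: concatenate arrays, recurse into objects,
# let the later payload's value win for scalars or mismatched types.
def deep_merge(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            merged[key] = deep_merge(a[key], value) if key in a else value
        return merged
    if isinstance(a, list) and isinstance(b, list):
        return a + b  # array concatenation
    return b  # later value wins

print(deep_merge(
    {"ids": [1], "meta": {"a": 1}},
    {"ids": [2], "meta": {"b": 2}},
))
# → {'ids': [1, 2], 'meta': {'a': 1, 'b': 2}}
```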
Related Modules

  • iterador (parallel record processing)
  • dataset (data generation for the merge)
  • dataTransform (transforms data after the merge)