Merge Datasets
Description
This module merges data from multiple workflow branches into a single consolidated object. It acts as a convergence point: data from every predecessor node is expected before the flow continues. Partial data is stored in a temporary database table (`tempmergedata`) until all source nodes have sent their data. Once every source has contributed, the module combines the results and passes them on to the next node. It is essential in workflows with parallel branches that need to meet at a common point.
Configuration
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| sources | string | No | List of expected source nodes (automatically configured from predecessors). |
| mapschema | string | No | Optional schema to transform the merged data. |
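For illustration, a node configuration using both parameters might look like the fragment below. The field names `sources` and `mapschema` come from the table above; the node names and schema name are invented, and `sources` is shown as a comma-separated string since the parameter is typed as a string.

```json
{
  "sources": "api_node_1,api_node_2",
  "mapschema": "merged_output_schema"
}
```

In practice `sources` is normally filled in automatically from the node's predecessors, so it rarely needs to be set by hand.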
Output
```json
{
  "nextModule": "next_module",
  "data": {
    "api_node_1": { "result": "data from node 1" },
    "api_node_2": { "result": "data from node 2" }
  }
}
```

- Partial data is persisted in the `tempmergedata` database table
- The module only continues once ALL predecessor nodes have sent their data
- If not all data has arrived, it returns `nextModule: null` (pausing the flow)
- Metadata (`_meta_`) and the `source` property are removed from each payload before storing
- The merge key is built as `merge_{workflowId}_{node_name_alias}`
- Deep merge is supported: arrays are concatenated and nested objects are merged recursively
- Each partial payload is stored under the name of its source node
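The deep-merge behavior listed above (array concatenation plus recursive object merging) can be sketched like this. It is a minimal illustration of those semantics, not the module's actual code; the tie-breaking rule that the later value wins for scalars is an assumption.

```python
def deep_merge(a, b):
    """Merge b into a: dicts merge key-by-key recursively, lists are
    concatenated, and for scalars the value from b wins (assumed)."""
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            merged[key] = deep_merge(merged[key], value) if key in merged else value
        return merged
    if isinstance(a, list) and isinstance(b, list):
        return a + b  # arrays are concatenated, not overwritten
    return b
```

For example, merging `{"x": [1], "y": {"a": 1}}` with `{"x": [2], "y": {"b": 2}}` yields `{"x": [1, 2], "y": {"a": 1, "b": 2}}`.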
Related Nodes
- iterador (parallel record processing)
- dataset (data generation for merge)
- dataTransform (transform data after merge)