# Merge
## Description

The Merge module collects and unifies data from multiple chunks processed in parallel (for example, from an Iterator). It waits until all chunks of an execution (identified by `workflowRunId`) have been processed and stored in the database. When the last chunk is received, it combines all the data into a unified array (`mergedResult`) and cleans up the temporary records. If chunks are still missing, it returns a waiting message without continuing the flow. It is the natural complement to the Iterator, recombining the results the Iterator split apart.
## Configuration

### Parameters
Section titled “Parameters”| Parameter | Type | Required | Description |
|---|---|---|---|
| (no configurable parameters) | - | - | This module requires no additional configuration. It operates automatically based on the `_meta_` metadata carried in the input data. |
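For illustration, an incoming chunk might look like the following. Only `_meta_.workflowRunId` and `_meta_.totalChunks` are documented as required; the `chunkIndex` field and the payload key are assumptions shown here to make the shape concrete.

```json
{
  "item1": "processed_data_1",
  "_meta_": {
    "workflowRunId": "run-123",
    "chunkIndex": 0,
    "totalChunks": 3
  }
}
```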
## Output

```json
{
  "nextModule": "next_module",
  "data": {
    "mergedResult": [
      { "item1": "processed_data_1" },
      { "item2": "processed_data_2" },
      { "item3": "processed_data_3" }
    ]
  }
}
```

## Usage Example
### Basic case

```json
{
  "label": "merge"
}
```

- Requires the input data to contain `_meta_.workflowRunId` and `_meta_.totalChunks` to function correctly.
- Chunks are stored in the `merge_chunks` database table.
- If not all chunks have been received yet, it returns `nextModule: null` with a progress message.
- Once the merge is complete, the temporary records are deleted from the `merge_chunks` table.
- The data is combined into an ordered array within `mergedResult`.
- It is the complement of the Iterator module: the Iterator divides, the Merge recombines.
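While chunks are still outstanding, the module's response might look like the fragment below. The exact wording of the progress message is an assumption; the documented behavior is only that `nextModule` is `null` and a progress message is returned.

```json
{
  "nextModule": null,
  "data": {
    "message": "Waiting for chunks: 2 of 3 received"
  }
}
```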
## Related Nodes

- Iterator - splits an array into chunks for parallel processing
- MergeResults - advanced merge with support for iteration levels
- MergeMultiple - merge of multiple flow branches