AI Agent
Description
This module is the main node for managing conversations with artificial intelligence. Upon receiving a user message, the module:
- Obtains or creates an agent session linked to the user (by external_id or session_id).
- Cancels pending follow-ups when the user responds.
- Adds the user’s message to the session history.
- Prepares tools (workflows, datastores, APIs) if enabled.
- Calls the selected AI provider (OpenAI, Anthropic, Google, DeepSeek, Qwen, or Llama/Groq) with the history, system prompt, and configured personality.
- If the LLM requests tool execution (tool calls), it executes them and calls the LLM again with the results to generate the final response.
- Saves the agent’s response in the session and updates metrics (tokens, times, counters).
- Extracts entities from the conversation and stores them in persistent memory.
- Schedules automatic follow-ups if there are configured flows.
- Optionally simulates a human delay before returning the response.
If an error occurs and fallback is enabled, it returns a configurable error message instead of failing the entire flow.
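The steps above can be sketched as a simple turn loop. This is an illustrative assumption, not the module's actual implementation; the helper names (`agent_turn`, `call_llm`, `run_tool`) are hypothetical.

```python
# Hypothetical sketch of the agent turn described above: append the user
# message, let the LLM request tools, execute them, and re-call the LLM
# with the results until a final text response is produced.

def agent_turn(session, user_message, call_llm, run_tool, max_rounds=5):
    session["history"].append({"role": "user", "content": user_message})
    for _ in range(max_rounds):
        reply = call_llm(session["history"])
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:
            # No tools requested: save the final answer and return it.
            session["history"].append({"role": "assistant", "content": reply["content"]})
            return reply["content"]
        # Execute each requested tool and feed the result back as context.
        for call in tool_calls:
            result = run_tool(call["name"], call.get("arguments", {}))
            session["history"].append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("tool-call loop exceeded max_rounds")
```

The `max_rounds` cap is a defensive assumption to avoid an infinite tool-call loop.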
Configuration
| Parameter | Type | Required | Description |
|---|---|---|---|
| agent_id | text | Yes | Unique agent identifier. Used to link sessions and configurations. |
| brain_provider | select | Yes | AI provider the agent will use as its brain. Options: openai, anthropic, google, deepseek, qwen, llama. |
| brain_model | aiModelSelector | Yes | AI model to use. Models are loaded based on the selected provider. Default: gpt-4o. |
| credentials_key | credentials | Yes | AI provider credentials. Supported types: openai, anthropic, google_ai, deepseek, qwen, llama. |
| system_prompt | textarea | No | Base instructions for the agent. Defines its personality, knowledge, and behavior. |
| personality | select | No | Agent’s communication style. Options: professional, friendly, formal, casual, empathetic, custom. Default: professional. |
| max_tokens | number | No | Token limit for the agent’s response. Min: 50, Max: 4000. Default: 1000. |
| temperature | number | No | Response creativity (0=deterministic, 2=very creative). Default: 0.7. |
| enable_memory | boolean | No | Maintain conversation memory between messages. Default: true. |
| memory_window | number | No | Number of messages to keep in context. Min: 5, Max: 100. Default: 20. |
| enable_tools | boolean | No | Allow the agent to execute tools (workflows, datastores, APIs). Default: false. |
| tools | agentTools | No | Tools available to the agent. Only visible if enable_tools is true. |
| human_delay_enabled | boolean | No | Add small delays to simulate human typing. Default: true. |
| human_delay_min_ms | number | No | Minimum delay in milliseconds. Default: 500. |
| human_delay_max_ms | number | No | Maximum delay in milliseconds. Default: 3000. |
| error_message | textarea | No | Message to display when an error occurs. |
| error_fallback_enabled | boolean | No | Continue the flow with an error message instead of failing completely. Default: true. |
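The numeric parameters above have documented ranges and defaults. A minimal sketch of how a caller might normalize a configuration before sending it, assuming simple clamp-to-range semantics (the module's actual validation behavior is not documented):

```python
# Illustrative config normalizer (not part of the module). Clamps the
# documented numeric parameters to their ranges and fills in defaults.

DEFAULTS = {"max_tokens": 1000, "temperature": 0.7, "memory_window": 20}
RANGES = {"max_tokens": (50, 4000), "temperature": (0.0, 2.0), "memory_window": (5, 100)}

def normalize_config(cfg):
    out = dict(cfg)
    for key, default in DEFAULTS.items():
        lo, hi = RANGES[key]
        # Use the default when the key is absent, then clamp into range.
        out[key] = min(max(out.get(key, default), lo), hi)
    return out
```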
Credentials
You need to configure credentials_key with the selected AI provider’s credentials. Supported credential types are: openai, anthropic, google_ai, deepseek, qwen, llama. Credentials are obtained from the client_credentials table by credential_key.
Output
```json
{
  "success": true,
  "response": "Agent response text",
  "session_id": "session-uuid",
  "session_status": "active",
  "message_count": 12,
  "tokens_used": { "prompt": 350, "completion": 120, "total": 470 },
  "tool_calls": [ { "name": "buscar_producto", "success": true } ],
  "execution_time_ms": 2340,
  "memory": { "entities": { "nombre": "Juan", "email": "juan@ejemplo.com" } }
}
```

Usage Example
Basic case
```json
{
  "agent_id": "agente-ventas-01",
  "brain_provider": "openai",
  "brain_model": "gpt-4o",
  "credentials_key": "mi-clave-openai",
  "system_prompt": "You are a friendly and professional sales assistant.",
  "personality": "friendly",
  "max_tokens": 1000,
  "temperature": 0.7,
  "enable_memory": true,
  "memory_window": 20
}
```

Expected input data
The module expects to receive in inputData:
- message / text / content: The user’s message (required).
- external_id / user_id / from: User’s external identifier.
- channel: Communication channel (default: “workflow”).
- session_id: Existing session ID (optional).
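The alias fallbacks above can be resolved with a small helper. This is a sketch under the assumption that the first non-empty alias wins; the helper name and exact precedence are illustrative, not the module's documented behavior.

```python
# Hypothetical resolver for the input-field aliases described above.

def resolve_input(input_data):
    # The message is required and may arrive under any of three keys.
    message = next((input_data[k] for k in ("message", "text", "content") if input_data.get(k)), None)
    if message is None:
        raise ValueError("a user message is required (message/text/content)")
    return {
        "message": message,
        "external_id": next((input_data[k] for k in ("external_id", "user_id", "from") if input_data.get(k)), None),
        "channel": input_data.get("channel", "workflow"),  # documented default
        "session_id": input_data.get("session_id"),        # optional
    }
```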
API Used
Depends on the selected provider:
- OpenAI: Chat Completions API.
- Anthropic: Claude Messages API.
- Google: Gemini generateContent API.
- DeepSeek / Qwen / Llama (Groq): OpenAI-compatible APIs.
- The agent manages persistent sessions in the database with a complete message history.
- The memory window (memory_window) controls how many messages are sent as context to the LLM.
- Tools are executed sequentially; if any fails, the error result is included in the context.
- Human delay is calculated proportionally to the response length.
- Follow-ups are automatically scheduled if there are configured flows for the agent.
- When error fallback is enabled, the flow continues with success: false and the configured error message.
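Two of the notes above lend themselves to short sketches: the memory window as a slice of the history, and the human delay as a length-proportional value clamped to the configured bounds. The exact formulas are not documented, so the per-character rate below is an assumption.

```python
# Illustrative sketches of the memory window and human-delay behaviors.

def context_window(history, memory_window=20):
    """Send only the last `memory_window` messages to the LLM (default 20)."""
    return history[-memory_window:]

def human_delay_ms(response_text, min_ms=500, max_ms=3000, ms_per_char=15):
    """Scale the simulated typing delay with response length, clamped to bounds.
    The 15 ms/char rate is an assumption, not the module's documented value."""
    return min(max_ms, max(min_ms, len(response_text) * ms_per_char))
```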
Related Nodes
- AgentDecision - Agent Decision
- AgentSendFollowUp - Send Follow-Up
- agentChat - Conversational AI Agent