Conversational AI Agent

This module implements a conversational agent based on Google Gemini with history persistence and intelligent memory. Its main features are:

  1. Persistent history: Stores the conversation history in the agent_chat table by chatId. When receiving a new message, it loads the existing history and sends it as context to the model.
  2. Memory with automatic extraction: Uses a configurable memory schema with regex patterns to automatically extract user information (name, email, phone, national ID, company, products of interest, location, etc.) from both the current message and the complete history.
  3. Image support: Can receive and analyze images sent by the user. Images are processed as base64 and sent along with the prompt to the Gemini model.
  4. Enriched context: Before each model call, it injects a categorized summary of the extracted memory as context.
  5. Intelligent truncation: If the history grows too large (>50000 chars), it is truncated keeping the system prompt and the last 10 messages.
  6. Retry with backoff: Implements retries with exponential backoff for 503 and 429 errors.
  7. Markdown escaping: The response is returned both as plain text and with escaped Markdown characters (useful for Telegram).
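The retry behavior in point 6 can be sketched roughly as follows. This is an illustration only, not the module's actual code; the callable, delay values, and retry count are assumptions:

```python
import time

RETRYABLE = {429, 503}  # rate limit and service unavailable

def with_backoff(call, max_retries=3, base_delay=1.0):
    """Retry `call` on 429/503 status codes, doubling the delay each attempt."""
    for attempt in range(max_retries + 1):
        status, body = call()
        if status not in RETRYABLE or attempt == max_retries:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Exponential backoff like this avoids hammering the API while it is overloaded, which is exactly the situation 429 and 503 signal.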

The module calls the Google Gemini REST API (generativelanguage.googleapis.com) directly.
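A direct call to that endpoint can be sketched as follows, using only the Python standard library. The payload shape follows the public generateContent API; the helper name is hypothetical:

```python
import json
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta/models"

def build_request(model, api_key, messages):
    """Build a generateContent request. `messages` are (role, text) pairs;
    roles are 'user' or 'model' per the Gemini API."""
    url = f"{API_BASE}/{model}:generateContent"
    body = {
        "contents": [
            {"role": role, "parts": [{"text": text}]}
            for role, text in messages
        ]
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-goog-api-key": api_key,  # the auth header the module uses
        },
        method="POST",
    )

# Sending (requires a valid API key):
# with urllib.request.urlopen(
#         build_request("gemini-2.0-flash", KEY, [("user", "Hello")])) as resp:
#     data = json.load(resp)
```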

Parameters:

  • credentials_id (credentials, required): ID of the credentials containing the Google Gemini apiKey.
  • system_prompt (text, optional): System prompt that defines the agent’s personality and behavior. Supports dynamic variables.
  • prompt (text, optional): Additional prompt or question for the model. Supports dynamic variables.
  • memory_schema (object, optional): Memory field schema with regex patterns for automatic extraction.
  • model (select, optional): Gemini model to use. Default: gemini-2.0-flash.

credentials_id must reference a credential object containing apiKey (a Google Gemini API key, obtainable from https://aistudio.google.com/app/apikey).
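For example, a minimal credential object has this shape (the placeholder value is illustrative):

```json
{
  "apiKey": "YOUR_GEMINI_API_KEY"
}
```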

Example output:

{
  "nextModule": "next-node",
  "data": {
    "chatId": "chat-123",
    "reply": "Model response text",
    "replyhtml": "Text with escaped Markdown",
    "history": [],
    "memory": {
      "nombre": "Juan",
      "email": "juan@ejemplo.com",
      "empresa": "Mi Empresa S.L."
    },
    "imageProcessed": false,
    "imageMetadata": null,
    "historyTruncated": false,
    "historyLength": 15
  }
}
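The replyhtml field carries the escaped variant of the reply. A minimal sketch of the escaping step (the exact character set the module escapes is an assumption; this list follows Telegram's MarkdownV2 rules):

```python
# Characters Telegram's MarkdownV2 parse mode requires escaping.
SPECIAL = r"_*[]()~`>#+-=|{}.!"

def escape_markdown(text: str) -> str:
    """Backslash-escape MarkdownV2 special characters."""
    return "".join("\\" + ch if ch in SPECIAL else ch for ch in text)
```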
Example configuration:

{
  "credentials_id": "credencial-gemini",
  "system_prompt": "You are a friendly sales assistant. Help customers with product information.",
  "model": "gemini-2.0-flash",
  "memory_schema": {
    "nombre": {
      "pattern": "me llamo ([A-Za-z ]+)",
      "required": true
    },
    "email": {
      "pattern": "([a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,})",
      "required": false
    }
  }
}
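Applied to an incoming message, a schema like the one above drives extraction roughly like this (an illustrative sketch, not the module's actual code; the first-match-wins policy is an assumption):

```python
import re

memory_schema = {
    "nombre": {"pattern": r"me llamo ([A-Za-z ]+)", "required": True},
    "email": {"pattern": r"([a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,})",
              "required": False},
}

def extract_memory(schema, texts):
    """Run each field's regex over the messages; keep the first match."""
    memory = {}
    for field, spec in schema.items():
        pattern = re.compile(spec["pattern"], re.IGNORECASE)
        for text in texts:
            m = pattern.search(text)
            if m:
                memory[field] = m.group(1).strip()
                break
    return memory

extract_memory(memory_schema, ["Hola, me llamo Juan, juan@ejemplo.com"])
# → {'nombre': 'Juan', 'email': 'juan@ejemplo.com'}
```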
Expected fields on the incoming message:

  • chatId: Unique conversation identifier (required).
  • prompt / content: User message.
  • filePath: Path to an image file for visual analysis (optional).
  • width, height, file_size: Image metadata (optional).
  • caption: Image caption (optional).
External calls:

  • Google Gemini API: POST https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent
  • Authentication via the X-goog-api-key header.
Notes:

  • The default memory schema includes fields for: name, email, phone, national ID (DNI), foreigner ID (NIE), tax ID (CIF), passport, company, position, sector, products of interest, budget, urgency, city, and country.
  • History is automatically truncated to the last 10 messages once it exceeds 50000 characters, and to 30000 characters if it exceeds 60000.
  • Dynamic variables in system_prompt and prompt are resolved before the model call.
  • The system prompt is set only once per conversation (on the first message).
  • The continueOnError configuration field lets the flow continue with an error message instead of stopping.
  • Supported image formats: JPEG, PNG, GIF, WebP, BMP.
Related modules:

  • gemini - Gemini AI (base module, without history)
  • Agent - AI Agent (multi-provider)
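The truncation rule from the notes can be sketched as follows. The 50000-character threshold and keep-last-10 count come from the notes above; the message shape and everything else are assumptions:

```python
MAX_CHARS = 50_000
KEEP_LAST = 10

def truncate_history(history):
    """Keep the system prompt plus the last KEEP_LAST messages once the
    total text size exceeds MAX_CHARS. Each message is assumed to be a
    dict with 'role' and 'text' keys."""
    total = sum(len(m["text"]) for m in history)
    if total <= MAX_CHARS:
        return history, False
    system = [m for m in history if m["role"] == "system"][:1]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-KEEP_LAST:], True
```

Keeping the system prompt intact while dropping old turns preserves the agent's persona even in long conversations.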