
Floo AI - Intelligent Assistant

Floo AI is the artificial intelligence assistant integrated into the platform. It uses language models (GPT-4, GPT-4o) to help you create, optimize, and debug workflows.

Describe what you need in natural language, and Floo AI generates the complete workflow, nodes and connections included.

Example:

"When I receive an order via webhook, check the stock by calling my API,
if there is stock send a confirmation email, if not notify via Slack"

Floo AI automatically generates:

  • Webhook node as trigger
  • HTTP node to query the stock API
  • Decision node to evaluate the response
  • SendMail node on the true branch
  • Slack node on the false branch
  • Connections between all nodes
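The request to `/api/ai/generate-workflow` (listed in the endpoint table below) and the resulting workflow might look like the following sketch. The field names `description`, `nodes`, and `connections` are assumptions for illustration, not documented schema:

```python
import json

# Hypothetical request body for POST /api/ai/generate-workflow;
# the "description" field name is an assumption.
request_body = {
    "description": (
        "When I receive an order via webhook, check the stock by calling my API, "
        "if there is stock send a confirmation email, if not notify via Slack"
    )
}

# A plausible shape for the generated workflow, mirroring the node list above.
generated = {
    "nodes": [
        {"id": "n1", "type": "Webhook"},
        {"id": "n2", "type": "HTTP"},
        {"id": "n3", "type": "Decision"},
        {"id": "n4", "type": "SendMail"},
        {"id": "n5", "type": "Slack"},
    ],
    "connections": [
        ("n1", "n2"),
        ("n2", "n3"),
        ("n3", "n4"),  # true branch: stock available
        ("n3", "n5"),  # false branch: out of stock
    ],
}

print(json.dumps(request_body))
```

The one-trigger, one-decision, two-branch layout above is exactly the structure the bullet list describes.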

When describing a task, Floo AI suggests the most appropriate nodes. For example, if you type “send data to a PostgreSQL database”, it suggests the sqlQueryPostgres node with the relevant configuration.

If the AI service is unavailable, the system uses local keyword search as a fallback.
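A local keyword fallback of this kind could be as simple as the sketch below. The module index and matching logic are illustrative assumptions; only the module names (e.g. `sqlQueryPostgres`) come from this document:

```python
# Illustrative local keyword fallback: when the AI service is down,
# match the user's description against a small module index.
MODULE_KEYWORDS = {
    "sqlQueryPostgres": ["postgres", "postgresql", "sql"],
    "SendMail": ["email", "mail"],
    "Slack": ["slack", "notify"],
    "Webhook": ["webhook"],
}

def keyword_fallback(description: str) -> list[str]:
    """Return module names whose keywords appear in the description."""
    text = description.lower()
    return [name for name, words in MODULE_KEYWORDS.items()
            if any(w in text for w in words)]

print(keyword_fallback("send data to a PostgreSQL database"))  # ['sqlQueryPostgres']
```

A substring match like this is crude compared with the model-backed suggestion, but it keeps the feature usable offline.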

Floo AI can also analyze an existing workflow and suggest improvements:

  • Redundant nodes that can be removed
  • Parallelization opportunities
  • Improvements to decision structures
  • More efficient use of iterators
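One of these checks, detecting redundant nodes, can be sketched locally. The node shape (`type`, `config`) and the equality criterion are assumptions for illustration:

```python
from collections import Counter

# Illustrative check for one optimization class mentioned above:
# redundant nodes (same type and identical config appearing more than once).
def find_redundant(nodes: list[dict]) -> list[str]:
    """Return node types that occur multiple times with identical config."""
    sigs = Counter((n["type"], str(n.get("config", {}))) for n in nodes)
    return [node_type for (node_type, _), count in sigs.items() if count > 1]

workflow = [
    {"id": "a", "type": "HTTP", "config": {"url": "https://example.com/stock"}},
    {"id": "b", "type": "HTTP", "config": {"url": "https://example.com/stock"}},
    {"id": "c", "type": "SendMail", "config": {"to": "ops@example.com"}},
]
print(find_redundant(workflow))  # ['HTTP']
```

The two identical HTTP nodes could be collapsed into one, with both consumers wired to its output.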

When a node fails, Floo AI analyzes the error in context and suggests:

  • The probable cause of the error
  • A recommended solution
  • The correct node configuration

It also offers quick suggestions that don't consume AI tokens.
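The token-free quick suggestions could be served from a small local lookup before falling back to the model. The error patterns and hint texts below are assumptions for illustration:

```python
# Illustrative "quick suggestions" table: rule-based hints that can be
# returned without calling the AI model (patterns and hints are assumed).
QUICK_HINTS = {
    "ECONNREFUSED": "Check that the target service URL and port are reachable.",
    "401": "Verify the credentials configured on the node.",
    "timeout": "Increase the node's timeout or check the remote service load.",
}

def quick_suggestion(error_message: str):
    """Return a canned hint if a known pattern matches, else None."""
    for pattern, hint in QUICK_HINTS.items():
        if pattern.lower() in error_message.lower():
            return hint
    return None  # no match: fall back to the token-consuming AI analysis

print(quick_suggestion("connect ECONNREFUSED 127.0.0.1:5432"))
```

Only errors that miss every pattern need the full, token-consuming contextual analysis.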

Floo AI also provides an integrated chat where you can ask questions about:

  • How to configure a specific node
  • Which module to use for a task
  • How to solve a problem in your workflow
  • Which modules match a given functionality
The assistant exposes the following REST endpoints:

Endpoint                        Method  Description
/api/ai/generate-workflow       POST    Generate a workflow from a description
/api/ai/suggest-nodes           POST    Suggest nodes for a task
/api/ai/analyze-error           POST    Analyze a node error
/api/ai/chat                    POST    Chat with the assistant
/api/ai/optimize-workflow       POST    Optimize an existing workflow
/api/ai/modules                 GET     List all available modules
/api/ai/modules/search/:query   GET     Search modules
/api/ai/categories              GET     List module categories
/api/ai/token-estimate          POST    Estimate tokens before executing
/api/ai/usage/stats             GET     Monthly usage statistics
/api/ai/usage/limits            GET     Plan limits vs. current usage
Token usage:

  • Each AI operation consumes tokens from the configured model
  • The system estimates tokens before executing to avoid overages
  • Usage is tracked per client and per operation type
  • Limits depend on the contracted plan (queryable at /api/ai/usage/limits)
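Combining the estimate from /api/ai/token-estimate with the figures from /api/ai/usage/stats and /api/ai/usage/limits, a pre-flight overage check reduces to a comparison. The field values below are invented for illustration:

```python
# Illustrative pre-flight budget check: does the estimated cost of an
# operation fit within the remaining monthly token allowance?
def within_budget(estimated_tokens: int, used: int, monthly_limit: int) -> bool:
    """Return True if the operation fits in the remaining allowance."""
    return used + estimated_tokens <= monthly_limit

print(within_budget(1_200, used=48_000, monthly_limit=50_000))  # True
print(within_budget(3_000, used=48_000, monthly_limit=50_000))  # False
```

When the check fails, the operation can be rejected up front instead of being cut off mid-generation.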