Floo AI - Intelligent Assistant
Floo AI is the artificial intelligence assistant integrated into the platform. It uses language models (GPT-4, GPT-4o) to help with the creation, optimization, and debugging of workflows.
Capabilities

a) Workflow generation from description

Describe in natural language what you need and Floo AI generates the complete workflow with nodes and connections.
Example:

"When I receive an order via webhook, check the stock by calling my API; if there is stock, send a confirmation email, if not, notify via Slack."

Floo AI automatically generates:
- Webhook node as trigger
- HTTP node to query the stock API
- Decision node to evaluate the response
- SendMail node on the true branch
- Slack node on the false branch
- Connections between all nodes
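The generated workflow might take a shape like the following sketch. The field and node-type names (`trigger`, `nodes`, `connections`, `http`, `decision`, and so on) are illustrative assumptions based on the node list above, not the platform's exact schema.

```python
# Illustrative shape of a generated workflow; all names are assumptions.
workflow = {
    "trigger": {"id": "n1", "type": "webhook"},
    "nodes": [
        {"id": "n2", "type": "http", "config": {"url": "https://example.com/stock"}},
        {"id": "n3", "type": "decision", "config": {"condition": "stock > 0"}},
        {"id": "n4", "type": "sendMail"},  # true branch
        {"id": "n5", "type": "slack"},     # false branch
    ],
    "connections": [
        ("n1", "n2"),
        ("n2", "n3"),
        ("n3", "n4"),
        ("n3", "n5"),
    ],
}

# Sanity check: every connection endpoint refers to a known node id.
node_ids = {workflow["trigger"]["id"]} | {n["id"] for n in workflow["nodes"]}
assert all(a in node_ids and b in node_ids for a, b in workflow["connections"])
```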
b) Node suggestions

When describing a task, Floo AI suggests the most appropriate nodes. For example, if you type "send data to a PostgreSQL database", it suggests the sqlQueryPostgres node with the relevant configuration.
If the AI service is unavailable, the system uses local keyword search as a fallback.
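A local keyword fallback of the kind described above could work roughly like this minimal sketch. The module catalog and scoring rule are invented for illustration; the platform's actual fallback may differ.

```python
# Minimal keyword-fallback sketch: score each module by how many query
# words appear in its name or description. Module data is invented.
MODULES = {
    "sqlQueryPostgres": "Run SQL queries against a PostgreSQL database",
    "sendMail": "Send an email via SMTP",
    "slack": "Post a message to a Slack channel",
}

def keyword_search(query: str) -> list[str]:
    words = query.lower().split()
    scored = [
        (sum(w in name.lower() or w in desc.lower() for w in words), name)
        for name, desc in MODULES.items()
    ]
    # Highest-scoring modules first; drop modules with no match at all.
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(keyword_search("send data to a postgresql database")[0])
# → sqlQueryPostgres
```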
c) Optimization of existing workflows

Floo AI analyzes an existing workflow and suggests improvements:
- Redundant nodes that can be removed
- Parallelization opportunities
- Improvements to decision structures
- More efficient use of iterators
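As a toy illustration of one such check, redundant nodes could be flagged when two consecutive nodes have the same type and identical configuration. The workflow shape and rule are invented for this sketch; the real analysis is done by the AI, not by a fixed rule.

```python
# Toy redundancy check: flag consecutive nodes with identical type and
# config. Node shape is an invented illustration.
def redundant_pairs(nodes: list[dict]) -> list[tuple[str, str]]:
    return [
        (a["id"], b["id"])
        for a, b in zip(nodes, nodes[1:])
        if a["type"] == b["type"] and a.get("config") == b.get("config")
    ]

nodes = [
    {"id": "n1", "type": "http", "config": {"url": "https://api.example.com"}},
    {"id": "n2", "type": "http", "config": {"url": "https://api.example.com"}},
    {"id": "n3", "type": "sendMail", "config": {}},
]
print(redundant_pairs(nodes))
# → [('n1', 'n2')]
```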
d) Error analysis

When a node fails, Floo AI analyzes the error in context and suggests:
- Probable cause of the error
- Recommended solution
- Correct node configuration

It also offers quick suggestions without consuming AI tokens.
e) Conversational chat

An integrated chat where you can ask questions about:
- How to configure a specific node
- Which module to use for a task
- How to solve a problem in your workflow
- Search for modules by functionality
AI API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| /api/ai/generate-workflow | POST | Generate a workflow from a description |
| /api/ai/suggest-nodes | POST | Suggest nodes for a task |
| /api/ai/analyze-error | POST | Analyze a node error |
| /api/ai/chat | POST | Chat with the assistant |
| /api/ai/optimize-workflow | POST | Optimize an existing workflow |
| /api/ai/modules | GET | List all available modules |
| /api/ai/modules/search/:query | GET | Search modules |
| /api/ai/categories | GET | List module categories |
| /api/ai/token-estimate | POST | Estimate token usage before executing |
| /api/ai/usage/stats | GET | Monthly usage statistics |
| /api/ai/usage/limits | GET | Plan limits vs. current usage |
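A request to one of these endpoints might be built as in the sketch below, shown with the chat endpoint. The base URL, request body field (`message`), and bearer-token auth scheme are assumptions; only the endpoint path and method come from the table above. The request is constructed but not sent.

```python
import json
import urllib.request

# Assumed base URL and auth scheme; replace with your instance's values.
BASE = "https://your-floo-instance.example.com"

req = urllib.request.Request(
    BASE + "/api/ai/chat",
    data=json.dumps({"message": "Which module sends email?"}).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <API_TOKEN>",
    },
    method="POST",
)

# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method(), req.full_url)
```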
Token Consumption

- Each AI operation consumes tokens from the configured model
- The system estimates tokens before executing to avoid overages
- Usage is tracked per client and per operation type
- Limits depend on the contracted plan (queryable at /api/ai/usage/limits)
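The pre-flight pattern above can be sketched as a simple check: estimate the operation's tokens, compare against the plan's remaining allowance, and only proceed if it fits. The numbers and parameter names are illustrative; real values would come from /api/ai/token-estimate and /api/ai/usage/limits.

```python
# Pre-flight check sketch: run the operation only if the estimated token
# cost fits within the plan limit. All values are illustrative.
def can_run(estimated_tokens: int, used_this_month: int, plan_limit: int) -> bool:
    return used_this_month + estimated_tokens <= plan_limit

# e.g. estimate from /api/ai/token-estimate, usage from /api/ai/usage/limits
print(can_run(estimated_tokens=1200, used_this_month=48000, plan_limit=50000))
# → True
```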