Multi-Agent Systems
Multi-agent systems consist of multiple specialized agents that collaborate to solve complex tasks. Each agent has its own expertise, tools, and reasoning capabilities.
Specialization
Each agent focuses on specific tasks and has relevant tools
Collaboration
Agents communicate and share results through workflow execution
Scalability
Add new specialized agents without modifying existing ones
Parallel Execution
Multiple agents can work simultaneously on different subtasks
Communication Patterns
- Orchestrator Pattern
- Actor Pattern
- Hybrid Pattern
Orchestrator Pattern: one coordinator agent delegates to specialized agents.
When to use:
- Clear task decomposition
- Central decision making
- Sequential or parallel subtasks
Example flow:
- Orchestrator receives request
- Triggers Research Agent
- Waits for research results
- Triggers Writing Agent with research
- Triggers Review Agent with draft
- Aggregates and returns final content
Workflow Execution Tool
The Workflow Tool lets agents call other workflows as tools, enabling multi-agent collaboration.
Configuration
To create a Workflow Tool that calls another agent:
1. Add Tool Node
Add a Tool Node to your agent workflow and connect it to the LLM via the TOOLS handle.
2. Configure Tool Type
In the Tool Node configuration:
- Tool Type: Select “Workflow”
3. Select Target Workflow
- Select Workflow: Choose which workflow (agent) to call
- Only draft workflows appear in the dropdown
- Each workflow represents a specialized agent
- Select Start Node: Choose the entry point
- Select from start nodes in the target workflow
- Multiple start nodes = multiple entry points
4. Configure LLM Tool Calling
Configure the tool so the LLM can call it autonomously:
- Tool Name: execute_research_agent
  - Use descriptive snake_case names
  - The name should indicate what the tool does
- Tool Description:
  - Be specific about when to use it
  - Explain what the agent does
  - Help the LLM decide when to call this agent
- Tool Output: Define the INPUT schema (the parameters the LLM provides)
The Tool Output Schema defines what parameters the LLM must provide when calling this workflow tool, NOT what comes back from the workflow.
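For example, a minimal Tool Output schema for execute_research_agent might look like the sketch below. Standard JSON Schema conventions are assumed, and the field names are illustrative rather than required by the platform:

```json
{
  "type": "object",
  "properties": {
    "topic": {
      "type": "string",
      "description": "The topic the Research Agent should investigate"
    },
    "focus": {
      "type": "string",
      "description": "Optional angle or constraint for the research"
    }
  },
  "required": ["topic"]
}
```

The LLM fills in these fields when it decides to call the tool; the values then become available inside the target workflow.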
How It Works
When the LLM calls a Workflow Tool:
1. LLM decides to call the tool (e.g., execute_research_agent)
2. LLM generates a tool call with parameters matching the Tool Output schema (see the sketch after this list)
3. Splox spawns a new workflow execution
   - Target workflow starts at the specified Start Node
   - Tool call parameters are available in the target workflow via {{ llm.message.tool_calls[0].function.arguments }}
   - Target workflow processes the request
4. Target workflow executes completely
   - Agent runs its iterations
   - Uses its own tools and memory
   - Completes the task autonomously
5. Results return to the calling agent
   - Output comes from the target workflow's End Node
   - Calling agent receives results in the next iteration
   - Results are stored in memory for context
6. Calling agent continues with the results
   - Processes the returned data
   - Decides the next action (call another tool, respond, etc.)
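As a rough sketch, assuming an OpenAI-style tool_calls structure (which the template path above suggests), the generated tool call in step 2 might look like this; the id and argument values are illustrative:

```json
{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "execute_research_agent",
    "arguments": "{\"topic\": \"latest AI breakthroughs\"}"
  }
}
```

Inside the target workflow, {{ llm.message.tool_calls[0].function.arguments }} would then resolve to the JSON string held in arguments.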
Example: Research Agent Tool
Orchestrator Agent calls Research Agent.
Tool Configuration:
- Tool Name: call_research_agent
- Tool Description: "Search for information on a given topic"
- Tool Output Schema: see the sketch below
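The original schema is not reproduced here; a minimal sketch consistent with the example call below would expose a single required topic string:

```json
{
  "type": "object",
  "properties": {
    "topic": {
      "type": "string",
      "description": "Topic to research"
    }
  },
  "required": ["topic"]
}
```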
Research Agent workflow:
- Receives: { "topic": "latest AI breakthroughs" }
- Searches multiple sources
- Compiles findings
- Returns results via End Node
Orchestrator Agent:
- Gets research findings in memory
- Can use results to make decisions
- Can call other agents or respond to user
Features
Modular Agents
Each workflow is a reusable agent component
LLM Decides
LLM autonomously decides when to call other agents
Pass Parameters
LLM provides structured parameters via Tool Output schema
Receive Results
Complete workflow results return to calling agent’s memory
Workflow Tools are standard Tool Nodes configured with Tool Type = “Workflow”. They appear to the LLM as regular tools but trigger entire workflow executions behind the scenes.
Architecture Examples
Orchestrator Pattern
1. Create Orchestrator Agent
Build a coordinator agent with Workflow Execution tools:
- execute_research_agent
- execute_writing_agent
- execute_review_agent
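A sketch of how these three tools might be described to the LLM follows; the descriptions are illustrative and parameters are shown in shorthand rather than full JSON Schema:

```json
[
  {
    "name": "execute_research_agent",
    "description": "Gather information on a topic using web search and document scraping",
    "parameters": { "topic": "string" }
  },
  {
    "name": "execute_writing_agent",
    "description": "Draft content from research findings",
    "parameters": { "topic": "string", "research": "string" }
  },
  {
    "name": "execute_review_agent",
    "description": "Check a draft for grammar, factual accuracy, and quality",
    "parameters": { "draft": "string" }
  }
]
```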
2. Create Specialized Agents
Build individual agents, each with domain-specific tools:
Research Agent:
- Web search tool
- Document scraper
- Data extraction
Writing Agent:
- Content generation
- Template formatting
- Style checker
Review Agent:
- Grammar check
- Fact verification
- Quality scoring
3. Connect Workflows
Configure Workflow Execution tools with target workflow IDs:
- Each tool points to a specific agent workflow
- Orchestrator can pass input data to agents
- Agents return results to orchestrator
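As an illustration only (the actual Tool Node fields may be named differently in the builder), connecting execute_research_agent to its target amounts to settings along these lines:

```json
{
  "tool_type": "Workflow",
  "tool_name": "execute_research_agent",
  "target_workflow": "research-agent",
  "start_node": "Start"
}
```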
4. Execute Pipeline
User triggers orchestrator workflow:
- Orchestrator analyzes request
- Calls Research Agent with topic
- Receives research results
- Calls Writing Agent with research data
- Receives draft content
- Calls Review Agent with draft
- Returns final polished content
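Put together, one pipeline run might produce a tool call sequence like this; the topic and payloads are illustrative:

```json
[
  { "tool": "execute_research_agent", "arguments": { "topic": "quantum computing trends" } },
  { "tool": "execute_writing_agent", "arguments": { "topic": "quantum computing trends", "research": "<findings returned by the Research Agent>" } },
  { "tool": "execute_review_agent", "arguments": { "draft": "<draft returned by the Writing Agent>" } }
]
```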
Actor Pattern
1. Create Peer Agents
Build multiple agents that can call each other:
Support Agent:
- Knowledge base search
- Ticket creation
- call_billing_agent tool
- call_technical_agent tool
Billing Agent:
- Invoice lookup
- Payment processing
- call_support_agent tool
- call_technical_agent tool
Technical Agent:
- System diagnostics
- Error logs
- call_support_agent tool
- call_billing_agent tool
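For instance, the Support Agent's tool list might be exposed to its LLM roughly as follows; the first two tool names are illustrative stand-ins for its knowledge base and ticketing tools:

```json
[
  { "name": "search_knowledge_base", "description": "Search internal documentation for an answer" },
  { "name": "create_ticket", "description": "Open a support ticket for follow-up" },
  { "name": "call_billing_agent", "description": "Consult the Billing Agent about invoices, payments, or account status" },
  { "name": "call_technical_agent", "description": "Consult the Technical Agent about diagnostics and error logs" }
]
```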
2. Enable Cross-Communication
Each agent has Workflow Execution tools pointing to other agents:
- No central coordinator required
- Agents decide when to consult peers
- Flexible, emergent collaboration
3. Execute Collaboration
User contacts Support Agent:
- Support Agent searches knowledge base
- Doesn’t find answer, calls Billing Agent
- Billing Agent checks account, finds technical issue
- Billing Agent calls Technical Agent
- Technical Agent diagnoses problem
- Technical Agent responds to Billing Agent
- Billing Agent responds to Support Agent
- Support Agent provides final answer to user
Use Cases
Content Pipeline
Orchestrator Pattern
- Research agent gathers information
- Writing agent creates content
- Review agent checks quality
- Publishing agent distributes
Customer Service
Actor Pattern
- Support agent handles requests
- Billing agent manages payments
- Technical agent fixes issues
- Agents consult each other as needed
Software Development
Hybrid Pattern
- PM agent orchestrates sprints
- Developer agents code features
- QA agents run tests
- DevOps agents deploy
Data Processing
Orchestrator Pattern
- Ingestion agent loads data
- Cleaning agent validates
- Analysis agent generates insights
- Reporting agent creates dashboards
Best Practices
Clear Agent Responsibilities
Define specific roles for each agent
- Each agent should have a clear purpose
- Avoid overlapping responsibilities
- Specialize tools for each agent’s domain
- Document agent capabilities
For example:
- Research Agent → Information gathering only
- Writing Agent → Content creation only
- Review Agent → Quality checks only
Efficient Communication
Minimize agent-to-agent calls
- Batch requests when possible
- Pass complete context in tool calls
- Avoid chatty back-and-forth
- Use timeouts to prevent deadlocks
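For example, rather than several short back-and-forth calls, a single call can carry the complete context; the field names here are illustrative:

```json
{
  "tool": "call_billing_agent",
  "arguments": {
    "customer_id": "C-1042",
    "question": "Why was the March invoice charged twice?",
    "context": "Customer confirmed the payment method is valid; the knowledge base had no matching article."
  }
}
```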
Error Handling
Handle agent failures gracefully
- Add error edges to Workflow Execution tools
- Implement retry logic in orchestrator
- Provide fallback strategies
- Log failures for debugging
State Management
Track workflow state across agents
- Use memory nodes for agent context
- Pass relevant state in tool calls
- Store intermediate results
- Enable agent resumption after errors
Cost Optimization
Monitor multi-agent costs
- Each agent execution incurs LLM costs
- Optimize number of agent calls
- Use smaller models for simple agents
- Cache repeated agent results
Common Patterns
Sequential Pipeline
Use case: Content creation, data processing
Parallel Execution
Use case: Independent subtasks, batch processing
Conditional Routing
Use case: Dynamic workflows, decision trees
Iterative Refinement
Use case: Quality improvement, code review
Monitoring Multi-Agent Systems
Agent Call Graph
Visualize agent interactions
- Which agents called which?
- How many hops between agents?
- Identify bottlenecks
- Detect circular dependencies
Execution Timing
Track agent performance
- Time per agent
- Waiting time between agents
- Parallel execution efficiency
- Total pipeline duration
Cost Attribution
Monitor spending per agent
- LLM costs per agent
- Tool execution costs
- Total system cost
- Cost per user request
Error Rates
Track agent failures
- Which agents fail most?
- Error propagation patterns
- Retry success rates
- Fallback effectiveness

