
Multi-Agent Systems

Multi-agent systems consist of multiple specialized agents that collaborate to solve complex tasks. Each agent has its own expertise, tools, and reasoning capabilities.

Specialization

Each agent focuses on specific tasks and has relevant tools

Collaboration

Agents communicate and share results through workflow execution

Scalability

Add new specialized agents without modifying existing ones

Parallel Execution

Multiple agents can work simultaneously on different subtasks

Communication Patterns

One coordinator agent delegates to specialized agents.

When to use:
  • Clear task decomposition
  • Central decision making
  • Sequential or parallel subtasks
Example: Content creation pipeline
  1. Orchestrator receives request
  2. Triggers Research Agent
  3. Waits for research results
  4. Triggers Writing Agent with research
  5. Triggers Review Agent with draft
  6. Aggregates and returns final content
The orchestrator uses the Workflow Execution tool to trigger other agent workflows and waits for their results before proceeding.

Workflow Execution Tool

The Workflow Tool lets agents call other workflows as tools, enabling multi-agent collaboration.

Configuration

To create a Workflow Tool that calls another agent:

1. Add Tool Node

Add a Tool Node to your agent workflow and connect it to the LLM via the TOOLS handle.

2. Configure Tool Type

In the Tool Node configuration:
  • Tool Type: Select “Workflow”

3. Select Target Workflow

  • Select Workflow: Choose which workflow (agent) to call
    • Only draft workflows appear in the dropdown
    • Each workflow represents a specialized agent
  • Select Start Node: Choose the entry point
    • Select from start nodes in the target workflow
    • Multiple start nodes = multiple entry points

4. Configure LLM Tool Calling

Configure the tool so the LLM can call it autonomously:
  • Tool Name: execute_research_agent
    • Use descriptive snake_case names
    • Indicates what the tool does
  • Tool Description:
    Call the research agent to gather information on a specific topic.
    
    Use this when you need to research a topic before taking action.
    The research agent will search multiple sources and return findings.
    
    • Be specific about when to use it
    • Explain what the agent does
    • Help the LLM decide when to call this agent
  • Tool Output: Define the INPUT schema (parameters the LLM provides)
    {
      "type": "object",
      "properties": {
        "topic": {
          "type": "string",
          "description": "The research topic to investigate"
        },
        "depth": {
          "type": "string",
          "enum": ["basic", "detailed", "comprehensive"],
          "description": "How deep the research should be"
        }
      },
      "required": ["topic"]
    }
    
    Tool Output Schema defines what parameters the LLM must provide when calling this workflow tool, NOT what comes back from the workflow.
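Before spawning the target workflow, the arguments the LLM generates should match this schema. A minimal sketch of that check, assuming plain Python with no external validator; `validate_args` is a hypothetical helper, not part of Splox:

```python
# The schema below mirrors the Tool Output example above.
TOOL_OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "topic": {"type": "string", "description": "The research topic to investigate"},
        "depth": {
            "type": "string",
            "enum": ["basic", "detailed", "comprehensive"],
            "description": "How deep the research should be",
        },
    },
    "required": ["topic"],
}

def validate_args(args: dict, schema: dict) -> list:
    """Return a list of validation errors; an empty list means the call is valid."""
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required parameter: {key}")
    for key, value in args.items():
        spec = schema["properties"].get(key)
        if spec is None:
            errors.append(f"unexpected parameter: {key}")
            continue
        if spec["type"] == "string" and not isinstance(value, str):
            errors.append(f"{key} must be a string")
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{key} must be one of {spec['enum']}")
    return errors

print(validate_args({"topic": "AI trends", "depth": "detailed"}, TOOL_OUTPUT_SCHEMA))  # -> []
print(validate_args({"depth": "extreme"}, TOOL_OUTPUT_SCHEMA))
```

A production system would typically use a full JSON Schema validator instead of a hand-rolled check.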

How It Works

When the LLM calls a Workflow Tool:
  1. LLM decides to call the tool (e.g., execute_research_agent)
  2. Generates tool call with parameters matching the Tool Output schema:
    {
      "topic": "artificial intelligence trends",
      "depth": "detailed"
    }
    
  3. Splox spawns a new workflow execution
    • Target workflow starts at specified Start Node
    • Tool call parameters available in target workflow via {{ llm.message.tool_calls[0].function.arguments }}
    • Target workflow processes the request
  4. Target workflow executes completely
    • Agent runs its iterations
    • Uses its own tools and memory
    • Completes task autonomously
  5. Results return to calling agent
    • Output from target workflow’s End Node
    • Calling agent receives results in next iteration
    • Results stored in memory for context
  6. Calling agent continues with results
    • Processes returned data
    • Decides next action (call another tool, respond, etc.)
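The six steps above can be sketched as a loop in plain Python. `run_workflow` and the scripted `llm_turns` stand in for Splox's workflow spawning and real LLM responses; every name here is illustrative, not part of the product API:

```python
def run_workflow(name: str, arguments: dict) -> str:
    """Stand-in for spawning a target workflow and returning its End Node output."""
    return f"{name} results for {arguments['topic']}"

def agent_loop(llm_turns, memory: list) -> str:
    """Each LLM turn either requests a tool call (steps 1-5) or answers (step 6)."""
    for turn in llm_turns:
        call = turn.get("tool_call")
        if call:
            # Steps 3-5: spawn the target workflow, store its output in memory
            output = run_workflow(call["name"], call["arguments"])
            memory.append({"role": "tool", "name": call["name"], "content": output})
        else:
            # Step 6: no tool call, so the agent returns its final answer
            return turn["content"]
    return memory[-1]["content"]

memory = []
turns = [
    {"tool_call": {"name": "execute_research_agent",
                   "arguments": {"topic": "artificial intelligence trends"}}},
    {"content": "Summary based on the research in memory."},
]
print(agent_loop(turns, memory))  # -> Summary based on the research in memory.
```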

Example: Research Agent Tool

The Orchestrator Agent calls the Research Agent.

Tool Configuration:
  • Tool Name: call_research_agent
  • Tool Description: “Search for information on a given topic”
  • Tool Output Schema:
    {
      "type": "object",
      "properties": {
        "topic": {
          "type": "string",
          "description": "What to research"
        }
      },
      "required": ["topic"]
    }
    
LLM Generates Tool Call:
{
  "name": "call_research_agent",
  "arguments": {
    "topic": "latest AI breakthroughs"
  }
}
Research Agent Workflow Executes:
  • Receives: { "topic": "latest AI breakthroughs" }
  • Searches multiple sources
  • Compiles findings
  • Returns results via End Node
Orchestrator Receives Results:
  • Gets research findings in memory
  • Can use results to make decisions
  • Can call other agents or respond to user

Features

Modular Agents

Each workflow is a reusable agent component

LLM Decides

LLM autonomously decides when to call other agents

Pass Parameters

LLM provides structured parameters via Tool Output schema

Receive Results

Complete workflow results return to the calling agent’s memory.

Workflow Tools are standard Tool Nodes configured with Tool Type = “Workflow”. They appear to the LLM as regular tools but trigger entire workflow executions behind the scenes.

Architecture Examples

Orchestrator Pattern

1. Create Orchestrator Agent

Build a coordinator agent with Workflow Execution tools:
  • execute_research_agent
  • execute_writing_agent
  • execute_review_agent
The orchestrator decides which agents to call and in what order

2. Create Specialized Agents

Build individual agents, each with domain-specific tools:

Research Agent:
  • Web search tool
  • Document scraper
  • Data extraction
Writing Agent:
  • Content generation
  • Template formatting
  • Style checker
Review Agent:
  • Grammar check
  • Fact verification
  • Quality scoring

3. Connect Workflows

Configure Workflow Execution tools with target workflow IDs:
  • Each tool points to a specific agent workflow
  • Orchestrator can pass input data to agents
  • Agents return results to orchestrator

4. Execute Pipeline

User triggers orchestrator workflow:
  1. Orchestrator analyzes request
  2. Calls Research Agent with topic
  3. Receives research results
  4. Calls Writing Agent with research data
  5. Receives draft content
  6. Calls Review Agent with draft
  7. Returns final polished content
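The pipeline above can be sketched as straight-line code, with a stand-in `run_workflow` in place of Splox's Workflow Tool invocations; the agent names mirror the example but the functions are illustrative only:

```python
def run_workflow(name: str, payload: dict) -> dict:
    """Stand-in for triggering an agent workflow and waiting on its result."""
    return {"agent": name, "input": payload}

def content_pipeline(request: str) -> dict:
    research = run_workflow("research_agent", {"topic": request})   # step 2
    draft = run_workflow("writing_agent", {"research": research})   # step 4
    final = run_workflow("review_agent", {"draft": draft})          # step 6
    return final                                                    # step 7

result = content_pipeline("AI trends")
print(result["agent"])  # -> review_agent
```

Each stage blocks until the previous agent's result is available, which is exactly the sequential behavior of the orchestrator pattern.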

Actor Pattern

1. Create Peer Agents

Build multiple agents that can call each other:

Support Agent:
  • Knowledge base search
  • Ticket creation
  • call_billing_agent tool
  • call_technical_agent tool
Billing Agent:
  • Invoice lookup
  • Payment processing
  • call_support_agent tool
  • call_technical_agent tool
Technical Agent:
  • System diagnostics
  • Error logs
  • call_support_agent tool
  • call_billing_agent tool

2. Enable Cross-Communication

Each agent has Workflow Execution tools pointing to other agents:
  • No central coordinator required
  • Agents decide when to consult peers
  • Flexible, emergent collaboration

3. Execute Collaboration

User contacts Support Agent:
  1. Support Agent searches knowledge base
  2. Doesn’t find answer, calls Billing Agent
  3. Billing Agent checks account, finds technical issue
  4. Billing Agent calls Technical Agent
  5. Technical Agent diagnoses problem
  6. Technical Agent responds to Billing Agent
  7. Billing Agent responds to Support Agent
  8. Support Agent provides final answer to user
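The escalation chain above can be sketched with plain functions standing in for agent workflows. The agent names mirror the example, but the knowledge base, routing conditions, and return strings are all made up for illustration:

```python
def technical_agent(issue: str) -> str:
    return f"diagnosis: restart the service affected by '{issue}'"

def billing_agent(question: str) -> str:
    # Billing finds the root cause is technical and consults its peer
    if "error" in question:
        return technical_agent(question)
    return "invoice looks fine"

def support_agent(user_message: str) -> str:
    knowledge_base = {"reset password": "Use the account settings page."}
    if user_message in knowledge_base:
        return knowledge_base[user_message]
    # No KB hit: escalate to the billing peer, which may escalate further
    return billing_agent(user_message)

print(support_agent("payment error on invoice 42"))
```

In the real actor pattern each of these functions would be a separate workflow with its own LLM, and the "if" checks would be the LLM deciding which `call_*_agent` tool to invoke.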

Use Cases

Content Pipeline

Orchestrator Pattern
  • Research agent gathers information
  • Writing agent creates content
  • Review agent checks quality
  • Publishing agent distributes
Sequential pipeline with clear stages

Customer Service

Actor Pattern
  • Support agent handles requests
  • Billing agent manages payments
  • Technical agent fixes issues
  • Agents consult each other as needed
Peer-to-peer collaboration

Software Development

Hybrid Pattern
  • PM agent orchestrates sprints
  • Developer agents code features
  • QA agents run tests
  • DevOps agents deploy
Mix of orchestration and peer communication

Data Processing

Orchestrator Pattern
  • Ingestion agent loads data
  • Cleaning agent validates
  • Analysis agent generates insights
  • Reporting agent creates dashboards
Structured data pipeline

Best Practices

Define specific roles for each agent
  • Each agent should have a clear purpose
  • Avoid overlapping responsibilities
  • Specialize tools for each agent’s domain
  • Document agent capabilities
Example:
  • Research Agent → Information gathering only
  • Writing Agent → Content creation only
  • Review Agent → Quality checks only
Minimize agent-to-agent calls
  • Batch requests when possible
  • Pass complete context in tool calls
  • Avoid chatty back-and-forth
  • Use timeouts to prevent deadlocks
Each workflow execution has overhead; optimize communication patterns.
Handle agent failures gracefully
  • Add error edges to Workflow Execution tools
  • Implement retry logic in orchestrator
  • Provide fallback strategies
  • Log failures for debugging
One agent failure shouldn’t crash the entire system
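A minimal sketch of retry-with-fallback for an agent call, assuming the orchestrator logic lives in code; `call_with_retry` and the flaky agent are hypothetical, not Splox APIs:

```python
import time

def call_with_retry(agent_fn, payload, retries=2, fallback=None):
    """Retry a failing agent call with backoff, then fall back instead of crashing."""
    for attempt in range(retries + 1):
        try:
            return agent_fn(payload)
        except Exception:
            if attempt < retries:
                time.sleep(0.01 * 2 ** attempt)  # brief exponential backoff
    return fallback(payload) if fallback else None

# Toy agent that fails twice, then succeeds on the third call
calls = {"n": 0}
def flaky_agent(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return f"ok: {payload}"

print(call_with_retry(flaky_agent, "diagnose", retries=2))  # -> ok: diagnose
```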
Track workflow state across agents
  • Use memory nodes for agent context
  • Pass relevant state in tool calls
  • Store intermediate results
  • Enable agent resumption after errors
Agents should maintain coherent state throughout collaboration
Monitor multi-agent costs
  • Each agent execution incurs LLM costs
  • Optimize number of agent calls
  • Use smaller models for simple agents
  • Cache repeated agent results
Multi-agent systems multiply costs; monitor carefully.
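One way to attribute spend per agent is a small tracker fed with token counts from each LLM call. The class and the per-1k-token rates below are illustrative placeholders, not real pricing:

```python
from collections import defaultdict

class CostTracker:
    """Attribute LLM spend to the agent that incurred it."""

    def __init__(self):
        self.costs = defaultdict(float)

    def record(self, agent: str, prompt_tokens: int, completion_tokens: int,
               usd_per_1k_in: float = 0.001, usd_per_1k_out: float = 0.002):
        # Placeholder rates; substitute your model's actual pricing
        self.costs[agent] += (prompt_tokens / 1000) * usd_per_1k_in \
                           + (completion_tokens / 1000) * usd_per_1k_out

    def total(self) -> float:
        return sum(self.costs.values())

tracker = CostTracker()
tracker.record("research_agent", prompt_tokens=1000, completion_tokens=500)
print(round(tracker.total(), 6))  # -> 0.002
```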

Common Patterns

Sequential Pipeline

Use case: Content creation, data processing
User Input

Orchestrator

Agent 1 (Research)

Agent 2 (Writing)

Agent 3 (Review)

Final Output
Each agent completes before the next starts. Clear, predictable flow.

Parallel Execution

Use case: Independent subtasks, batch processing
User Input

Orchestrator
    ├─→ Agent 1 (Research Topic A)
    ├─→ Agent 2 (Research Topic B)
    └─→ Agent 3 (Research Topic C)
    ↓ (wait for all)
Aggregation

Final Output
Multiple agents run simultaneously. Faster for independent tasks.
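The fan-out-then-wait shape above maps naturally onto a thread pool, sketched here with a stand-in `research_agent` function in place of real workflow executions:

```python
from concurrent.futures import ThreadPoolExecutor

def research_agent(topic: str) -> str:
    """Stand-in for one agent workflow execution."""
    return f"findings on {topic}"

def parallel_research(topics):
    # Fan out one agent execution per topic, then block until all results return
    with ThreadPoolExecutor(max_workers=len(topics)) as pool:
        return list(pool.map(research_agent, topics))

print(parallel_research(["Topic A", "Topic B", "Topic C"]))
```

`pool.map` preserves input order, so aggregation can rely on results lining up with the topics that produced them.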

Conditional Routing

Use case: Dynamic workflows, decision trees
User Input

Orchestrator (analyzes request)
    ├─→ If technical: Technical Agent
    ├─→ If billing: Billing Agent
    └─→ If general: Support Agent

Final Output
Orchestrator routes to appropriate agent based on request type.
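In Splox the routing decision would come from the orchestrator's LLM; as a sketch, the same decision expressed as keyword rules (the keywords and agent names are illustrative):

```python
def route(request: str) -> str:
    """Pick the agent that matches the request category."""
    text = request.lower()
    if "error" in text or "crash" in text:
        return "technical_agent"
    if "invoice" in text or "payment" in text:
        return "billing_agent"
    return "support_agent"  # general fallback

print(route("App crash on login"))          # -> technical_agent
print(route("Question about my invoice"))   # -> billing_agent
print(route("How do I change my username?"))  # -> support_agent
```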

Iterative Refinement

Use case: Quality improvement, code review
User Input

Agent 1 (Draft)

Agent 2 (Review) → If issues: back to Agent 1

Agent 3 (Final Polish)

Final Output
Agents collaborate in feedback loops until the quality threshold is met.
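The draft-review loop above, sketched with toy agent functions; a round cap prevents the loop from running forever if the reviewer never passes the draft. All names are illustrative:

```python
def refine(draft_fn, review_fn, polish_fn, max_rounds=3):
    """Loop draft -> review until the reviewer finds no issues or rounds run out."""
    draft = draft_fn(feedback=None)
    for _ in range(max_rounds):
        issues = review_fn(draft)
        if not issues:
            break  # quality threshold met
        draft = draft_fn(feedback=issues)
    return polish_fn(draft)

# Toy agents: the reviewer objects until the draft mentions sources
def draft_fn(feedback):
    return "draft with sources" if feedback else "draft"

def review_fn(draft):
    return [] if "sources" in draft else ["cite your sources"]

def polish_fn(draft):
    return f"polished {draft}"

print(refine(draft_fn, review_fn, polish_fn))  # -> polished draft with sources
```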

Monitoring Multi-Agent Systems

Agent Call Graph

Visualize agent interactions
  • Which agents called which?
  • How many hops between agents?
  • Identify bottlenecks
  • Detect circular dependencies
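A sketch of how such a call graph could be recorded and checked for cycles, assuming each agent-to-agent call is logged as an edge; the class is illustrative, not a Splox feature:

```python
from collections import defaultdict

class CallGraph:
    """Record which agent called which, then check for circular dependencies."""

    def __init__(self):
        self.edges = defaultdict(set)

    def record(self, caller: str, callee: str):
        self.edges[caller].add(callee)

    def has_cycle(self) -> bool:
        # Depth-first search with a "currently visiting" set to spot back-edges
        visiting, done = set(), set()

        def visit(node):
            if node in visiting:
                return True  # back-edge: we re-entered a node on the current path
            if node in done:
                return False
            visiting.add(node)
            if any(visit(n) for n in self.edges[node]):
                return True
            visiting.discard(node)
            done.add(node)
            return False

        return any(visit(n) for n in list(self.edges))

graph = CallGraph()
graph.record("support", "billing")
graph.record("billing", "technical")
print(graph.has_cycle())  # -> False
graph.record("technical", "support")
print(graph.has_cycle())  # -> True
```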

Execution Timing

Track agent performance
  • Time per agent
  • Waiting time between agents
  • Parallel execution efficiency
  • Total pipeline duration

Cost Attribution

Monitor spending per agent
  • LLM costs per agent
  • Tool execution costs
  • Total system cost
  • Cost per user request

Error Rates

Track agent failures
  • Which agents fail most?
  • Error propagation patterns
  • Retry success rates
  • Fallback effectiveness

What’s Next?