What is an Agent Node?

The Agent Node is the core building block of Splox workflows. It’s an autonomous AI system that combines an LLM with tools, conversation memory, and iteration logic to complete tasks. Unlike a simple LLM call, an Agent Node can:
  • Reason and plan using large language models
  • Execute actions through autonomous tool calling
  • Iterate in a loop — calling tools, processing results, and deciding next steps
  • Maintain conversation context across multiple turns with built-in memory
  • Stream responses in real-time via SSE
  • Collaborate with other agents using configurable execution modes
  • Handle voice/realtime interactions with supported providers

Autonomous Execution

The agent decides which tools to use and when to stop based on the task

Tool Integration

Connect tools via tool edges for the agent to use autonomously

Built-in Memory

Conversation context is managed automatically — no separate memory node needed

Configurable Limits

Set max iterations and timeouts to control behavior

Real-time Streaming

Token-by-token output streaming with session-based chat support

Multi-Agent

Connect agents together with sync, async, fire-forget, or handoff modes

How It Works

The Agent node runs an autonomous loop that alternates between LLM reasoning and tool execution:
1. Receive Input: The Agent receives input from its parent node (Start payload, previous node output, or variable mappings). If context memory is configured, the input is appended as a user message.
2. LLM Completion: The configured LLM processes the full conversation context (system prompt, memory history, and current input) and produces either a text response or tool call requests.
3. Tool Execution: If the LLM requests tool calls, the Agent executes them via connected Tool nodes (through tool edges). Results are appended to the conversation context as tool result messages.
4. Iteration: The Agent loops back to the LLM with the tool results. The loop continues until:
  • The LLM responds without tool calls (task complete)
  • The max iterations limit is reached
5. Output: The Agent’s final response is emitted as its output, flowing to downstream nodes via parallel edges.
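The loop above can be sketched in a few lines of Python. Here `call_llm` and `run_tool` are hypothetical stand-ins (not the actual Splox runtime API) that fake one tool round-trip, just to show the control flow:

```python
# Minimal sketch of the Agent loop: LLM completion -> tool execution -> repeat.
# call_llm and run_tool are illustrative stand-ins, not real provider calls.

def call_llm(messages):
    # Pretend LLM: requests a tool on the first pass, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "search", "args": {"q": "splox"}}]}
    return {"text": "Done: found 1 result.", "tool_calls": []}

def run_tool(call):
    return f"results for {call['args']['q']}"

def run_agent(user_input, max_iterations=50):
    messages = [{"role": "user", "content": user_input}]
    for iteration in range(1, max_iterations + 1):
        reply = call_llm(messages)
        if not reply.get("tool_calls"):      # no tool calls -> task complete
            return reply["text"], iteration
        for call in reply["tool_calls"]:     # execute tools, append results
            messages.append({"role": "tool", "content": run_tool(call)})
    return "max iterations reached", max_iterations

text, iterations = run_agent("find splox docs")
```

With the fake LLM above, the loop completes on the second iteration: one tool round-trip, then a final text response.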

Configuration

LLM Settings

Select the LLM provider and model for text generation.
| Field | Description | Default |
|---|---|---|
| Text LLM Provider | Provider slug (anthropic, openai, gemini, openrouter, etc.) | anthropic |
| Text LLM Model | Model identifier (e.g., claude-sonnet-4-5-20250929) | claude-sonnet-4-5-20250929 |
| Credential | Optional custom API key credential | Account default |
You can also configure a Voice LLM for realtime audio interactions (supported by OpenAI and Gemini).
The system prompt instructs the agent on its role, behavior, and constraints. It supports Jinja2 templates for dynamic content.

```
You are a helpful research assistant.
The user's name is {{ start.user_name }}.
Today's date is {{ start.date }}.
```

Use variable mappings to reference data from other nodes in your system prompt.
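To see how such placeholders resolve, here is a toy renderer for `{{ node.field }}` references. The real engine is a full Jinja2-style template language; this regex-based version only illustrates how a variable mapping pulls a value out of another node's output:

```python
import re

# Toy renderer for {{ node.field }} placeholders. Illustration only:
# the actual runtime uses a complete Jinja2-style template engine.
def render(template, context):
    def sub(match):
        node, field = match.group(1), match.group(2)
        return str(context[node][field])
    return re.sub(r"\{\{\s*(\w+)\.(\w+)\s*\}\}", sub, template)

context = {"start": {"user_name": "Ada", "date": "2025-01-01"}}
prompt = render("The user's name is {{ start.user_name }}.", context)
```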
Additional configuration passed to the LLM:

| Field | Description | Default |
|---|---|---|
| Thinking Tokens | Enable extended thinking with token budget | 4096 |
| Modalities | Output types (text, image) | ["text", "image"] |
| Cache Control | Prompt caching duration | 5m |
| Max Completion Tokens | Maximum output token length | 48000 |
| Enable 1M Context | Allow extended context window | true |

Tool Calling

Controls how the LLM selects tools:
| Value | Behavior |
|---|---|
| auto | LLM decides whether to use tools (default) |
| required | LLM must call at least one tool per turn |
| none | Tool calling is disabled |
| `<tool_name>` | Force a specific tool to be called |
What happens when a tool call fails:
| Value | Behavior |
|---|---|
| continue | Report the error to the LLM and let it decide what to do (default) |
| fail | Stop the agent entirely and mark it as failed |
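The difference between the two policies can be sketched as follows. `execute_tool_call` and `failing_tool` are illustrative names, not Splox internals; the point is that `continue` turns the exception into a tool result message, while `fail` propagates it:

```python
# Sketch of the two tool-failure policies. Names are illustrative.

def failing_tool(call):
    raise ConnectionError("upstream API unreachable")  # simulated failure

def execute_tool_call(call, tool, on_error="continue"):
    try:
        return {"role": "tool", "content": tool(call)}
    except Exception as exc:
        if on_error == "fail":
            raise RuntimeError("agent failed") from exc  # stop the agent
        # "continue": surface the error as a tool result so the LLM can
        # retry, pick another tool, or answer without it.
        return {"role": "tool", "content": f"Tool error: {exc}"}

msg = execute_tool_call({"name": "search"}, failing_tool)
```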
Enable human-in-the-loop approval for sensitive tool calls. When enabled, the agent pauses and waits for user approval before executing specific tools. Default timeout: 5 minutes.
Attach reusable skill sets to the agent. Skills provide additional tools and capabilities that can be shared across multiple agents.

Iteration Control

| Field | Description | Default | Range |
|---|---|---|---|
| Max Iterations | Maximum number of LLM → Tool → LLM cycles | 50 | 1–50 |
| Timeout | Maximum execution time in minutes | 5 | 25,600 (1 year) |
| Max Retries | Retries on transient failures | 3 | 0–10 |
Setting max iterations too high can lead to excessive credit usage. Start with a lower value and increase as needed.
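For the Max Retries setting, a conventional retry wrapper looks roughly like this. The function names and the zero backoff are illustrative; the actual scheduler's backoff policy is not documented here:

```python
import time

# Illustrative retry-on-transient-failure wrapper (Max Retries setting).
def with_retries(fn, max_retries=3):
    for attempt in range(max_retries + 1):  # 1 initial try + max_retries retries
        try:
            return fn()
        except TimeoutError:
            if attempt == max_retries:
                raise                       # retries exhausted: propagate
            time.sleep(0)                   # real code would back off here

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError                  # fail the first two attempts
    return "ok"

result = with_retries(flaky)
```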

Context Memory

The Agent has built-in conversation memory that persists across executions. This replaces the need for separate memory nodes.
The Context Memory ID links conversations across executions. Typically set to {{ start.chat_id }} so the same chat session maintains continuity. Different memory IDs create separate conversation threads.
Template for the user message appended each turn. Default: {{ start.text }}. This determines what the agent sees as the “user’s message” each time the workflow runs.
Control memory size to prevent context overflow:
| Field | Options | Default |
|---|---|---|
| Limit Type | tokens or messages | tokens |
| Max Tokens | Token limit for conversation history | 70,000 |
| Min Messages to Keep | Minimum messages preserved during trimming | 70 |
| Trim Strategy | drop_oldest or drop_middle | drop_oldest |
| Empty Tool Results | Replace tool result content with empty strings to save tokens | false |
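The drop_oldest strategy can be sketched as below. Token counting here is a naive word count for illustration; the real limiter counts tokens with the model's tokenizer:

```python
# Sketch of drop_oldest trimming: remove the oldest messages until the
# history fits the token budget, but never trim below min_keep messages.

def trim(messages, max_tokens, min_keep):
    def tokens(msgs):
        # Naive stand-in for a real tokenizer: count whitespace words.
        return sum(len(m["content"].split()) for m in msgs)
    trimmed = list(messages)
    while tokens(trimmed) > max_tokens and len(trimmed) > min_keep:
        trimmed.pop(0)  # drop_oldest: discard from the front
    return trimmed

history = [{"content": "one two three"}] * 10  # 30 "tokens" total
kept = trim(history, max_tokens=12, min_keep=2)
```

With a budget of 12 "tokens", the ten 3-token messages are trimmed from the front down to the four most recent.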
When enabled, old messages are summarized before being dropped, preserving key context.
| Field | Description | Default |
|---|---|---|
| Enable Summarization | Automatically summarize before trimming | true |
| Summarize Prompt | Custom prompt for the summarization LLM call | Default summary prompt |
Inject predefined messages into the conversation context. Useful for few-shot examples or persistent instructions. Enable Use Predefined Messages and add messages with specific roles (user, assistant, system).

Streaming

| Field | Description | Default |
|---|---|---|
| Enable Streaming | Stream tokens in real-time via SSE | true |
| Chat Streaming Mode | session for persistent chat connections | session |
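On the client side, an SSE token stream is a sequence of `data:` events whose deltas are concatenated into the running response. The parser below handles only the generic SSE framing; the event payload shown is illustrative, not Splox's actual wire format:

```python
# Minimal SSE client-side parsing: events are separated by blank lines,
# each "data: ..." line carries a token delta to append.

def parse_sse(raw):
    for block in raw.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data: "):
                yield line[len("data: "):]

raw = "data: Hel\n\ndata: lo\n\ndata:  world\n\n"
streamed = "".join(parse_sse(raw))
```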

Voice & Realtime

Voice agents are supported with OpenAI and Gemini providers.
| Field | Description | Default |
|---|---|---|
| Realtime Enabled | Enable voice/audio interactions | false |
| Voice LLM Provider | Provider for voice model | gemini |
| Voice LLM Model | Voice-capable model | gemini-2.5-flash-native-audio-preview-12-2025 |

Agent-to-Agent Communication

When connecting Agent nodes together, the edge between them can be configured with an execution mode that controls how the child agent runs relative to the parent:
| Mode | Parent Behavior | Child Result | Best For |
|---|---|---|---|
| sync | Blocks until child completes | Returned directly | Sequential delegation |
| async_inject | Continues immediately | Auto-injected when ready | Background processing |
| async_bidir | Continues immediately | Child has reply_to_parent tool | Ongoing collaboration |
| fire_forget | Continues immediately | Not received | Logging, notifications |
| handoff | Exits completely | Child takes over | Agent routing, escalation |
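How each mode changes the parent's control flow can be sketched as below. The runtime machinery is simplified to synchronous function calls plus a results inbox, and `run_child`/`delegate` are illustrative names (async_bidir is omitted since it adds a reply channel on top of async_inject):

```python
# Sketch of parent control flow per execution mode. Illustrative only.

def run_child(task):
    return f"child handled: {task}"

def delegate(task, mode, inbox):
    if mode == "sync":
        return run_child(task)             # parent blocks, gets result directly
    if mode == "async_inject":
        inbox.append(run_child(task))      # result injected into parent context
        return None                        # parent continues immediately
    if mode == "fire_forget":
        run_child(task)                    # result discarded
        return None
    if mode == "handoff":
        raise SystemExit(run_child(task))  # parent exits; child owns the flow

inbox = []
direct = delegate("summarize", "sync", inbox)
delegate("summarize", "async_inject", inbox)
```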

Edge Configuration

When connecting agents, you can customize how they appear to each other:
| Field | Description |
|---|---|
| Child Tool Name | How the child agent appears as a tool to the parent |
| Child Tool Description | Description of what the child agent does |
| Reply Tool Name | For async_bidir: name of the reply-to-parent tool |
| Reply Tool Description | For async_bidir: description of the reply tool |

Handles

| Handle | Position | Type | Description |
|---|---|---|---|
| Input | Left | Execution | Receives data from parent nodes |
| PARALLEL | Right | Execution | Sends output to next nodes on success |
| ERROR | Right | Error | Routes to fallback nodes on failure |
| TOOLS | Right | Tool | Connects to Tool nodes for autonomous tool calling |

Output

The Agent node produces an AgentResponse with:
| Field | Description |
|---|---|
| text | The agent’s final text response |
| tool_calls | List of tool calls made during execution |
| reasoning | The agent’s reasoning/thinking output (if thinking tokens enabled) |
| iterations | Number of LLM → Tool cycles performed |
| input_tokens | Total input tokens consumed |
| output_tokens | Total output tokens generated |
Access these in downstream nodes via variable mappings: {{ agent_node_name.text }}, {{ agent_node_name.iterations }}, etc.
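Conceptually, a mapping like {{ agent_node_name.text }} is a two-part lookup: node name, then field. The dictionary shape and `resolve` helper below are illustrative, showing only how the path splits against an AgentResponse-like output:

```python
# Illustrative AgentResponse-shaped output and how a "node.field"
# variable mapping resolves against it.

agent_node_name = {
    "text": "Paris is the capital of France.",
    "tool_calls": [{"name": "search"}],
    "iterations": 2,
    "input_tokens": 1200,
    "output_tokens": 85,
}

def resolve(path, scope):
    node, field = path.split(".")   # "agent_node_name.text" -> two-part lookup
    return scope[node][field]

value = resolve("agent_node_name.text", {"agent_node_name": agent_node_name})
```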

What’s Next?

Tool Node

Learn about tools that agents can use

Variable Mappings

Reference data between nodes with Pongo2 templates

Tool Edges

How to connect agents to tools

Node Lifecycle

Understand execution states and transitions