LLM Node
Purpose: Execute AI model completions with tool calling, streaming, and memory.

The LLM node is the core of agentic workflows, enabling AI models to generate responses, make decisions, and use tools.

Configuration
Label
- Description: A user-defined name to identify this LLM node in your workflow.
- Type: String
- Required: Yes
Integration
- Description: Select the LLM provider integration you want to use (OpenAI, Anthropic, OpenRouter, etc.).
- Type: Select (Dynamic)
- Required: Yes
- Options: Dynamically loaded from your connected LLM integrations:
  - OpenAI
  - Anthropic
  - OpenRouter
  - Custom providers

To add an integration:
1. Go to the Integrations page
2. Connect your LLM provider
3. The integration appears in this dropdown

You must connect an LLM integration before you can use the LLM node. Multiple integrations from the same provider can be added with different labels.
Model
- Description: Select the specific AI model to use for completions.
- Type: Select (Dynamic)
- Required: Yes
- Visibility: Only shown after an Integration is selected
- Options: Dynamically filtered based on the selected integration:
  - OpenAI: GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, o1-mini, o1-preview
  - Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  - OpenRouter: 200+ models from various providers
Response Mode
- Description: Defines the format of the model’s output.
- Type: Select
- Required: Yes
- Visibility: Only shown after a Model is selected
- Options:
  - Tool Calling - LLM can call connected tools and functions (for agentic workflows)
  - JSON - LLM returns structured JSON matching a defined schema
  - Text - LLM returns a plain text response (default chat mode)

When to use each mode:
- Tool Calling: Building agents that need to take actions (search, API calls, code execution)
- JSON: Structured data extraction, form filling, classification tasks
- Text: Simple chat, content generation, Q&A
Tool Choice
- Description: Controls how the LLM selects which tools to call.
- Type: Select
- Required: Conditional
- Visibility: Only shown when Response Mode is “Tool Calling”
- Options:
  - Auto - LLM decides whether to call tools or respond with text
  - Required - LLM must call at least one tool (no text-only responses)
  - None - LLM cannot call tools (tools are visible but not callable)

When to use each option:
- Auto: Standard agent behavior - the LLM chooses when tools are needed
- Required: Force tool usage (e.g., always run a search before answering)
- None: Show tools to the LLM for context but prevent calling them
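As a rough sketch of what each Tool Choice option means at the API level, the snippet below maps the setting onto an OpenAI-style chat completions payload. The payload shape and the `web_search` tool are assumptions for illustration; the node's provider integration builds the actual request for you.

```python
# Sketch (assumed OpenAI-style payload): how the node's Tool Choice
# setting might translate into request fields.

def build_request(tool_choice: str, tools: list) -> dict:
    """Translate the Tool Choice setting into request fields."""
    if tool_choice not in {"auto", "required", "none"}:
        raise ValueError(f"unknown tool choice: {tool_choice}")
    return {
        "model": "gpt-4o",           # hypothetical model selection
        "tools": tools,              # tool schemas stay visible in all modes
        "tool_choice": tool_choice,  # "auto" | "required" | "none"
    }

# Hypothetical connected tool.
search_tool = {
    "type": "function",
    "function": {"name": "web_search", "parameters": {"type": "object"}},
}

req = build_request("required", [search_tool])
print(req["tool_choice"])  # required
```

Note that even with `"none"`, the tool schemas are still sent in the request, which is why the LLM can see them for context without being able to call them.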
Schema
- Description: JSON schema that defines the structure of the expected response.
- Type: Schema Builder
- Required: Conditional
- Visibility: Only shown when Response Mode is “JSON”

How it works:
- Define the JSON structure you want the LLM to return
- LLM output will conform to this schema
- Useful for data extraction, classification, form filling

Use the Schema Builder UI to visually create JSON schemas without writing JSON manually.
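To make this concrete, here is a hypothetical ticket-classification schema of the kind the Schema Builder might produce, plus a minimal check that a simulated model response conforms to it. The field names are invented for illustration.

```python
import json

# Hypothetical schema: classify a support ticket (illustrative only).
schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "other"]},
        "urgency": {"type": "integer", "minimum": 1, "maximum": 5},
    },
    "required": ["category", "urgency"],
}

# Simulated LLM output in JSON response mode.
raw = '{"category": "billing", "urgency": 4}'
data = json.loads(raw)

# Minimal conformance check; a real validator (e.g. the jsonschema
# library) would also enforce types, enums, and ranges.
missing = [k for k in schema["required"] if k not in data]
assert not missing, f"missing keys: {missing}"
print(data["category"])  # billing
```

With a schema like this, downstream nodes can rely on `category` and `urgency` always being present instead of parsing free-form text.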
Strict Mode
- Description: Enforces strict validation against the defined schema in JSON response mode.
- Type: Boolean
- Required: No
- Default: true
- Visibility: Only shown when Response Mode is “JSON”
- Options:
  - true - LLM output must exactly match the schema (recommended)
  - false - Allow minor schema deviations

Set to false when:
- The schema is complex and the LLM struggles to match it perfectly
- You want the LLM to have flexibility in output structure
System Prompt
- Description: System instructions that define the LLM’s behavior, personality, and task.
- Type: Editor (supports template variables)
- Required: Yes

Features:
- Template Variables: Use {{variable_name}} to inject data from previous nodes
- Multi-line: Write detailed instructions
- Markdown Support: Format prompts for clarity
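The template-variable mechanic can be sketched as a simple substitution pass. The variable names (`company`, `date`) are hypothetical; the node performs the actual injection from previous node outputs.

```python
import re

# Sketch: filling {{variable_name}} placeholders in a system prompt
# from workflow context. Variable names here are hypothetical.
prompt = "You are a support agent for {{company}}. Today's date is {{date}}."
context = {"company": "Acme Corp", "date": "2024-06-01"}

def render(template: str, variables: dict) -> str:
    """Replace each {{name}} with its value; leave unknown names intact."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render(prompt, context))
# You are a support agent for Acme Corp. Today's date is 2024-06-01.
```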
Additional LLM Config
- Description: Advanced model parameters for fine-tuning LLM behavior.
- Type: Input Schema (Dynamic)
- Required: No

Common parameters:
- temperature (0-2) - Controls randomness (0 = focused, 2 = creative)
- max_tokens - Maximum response length
- top_p (0-1) - Nucleus sampling for diversity
- frequency_penalty (-2 to 2) - Reduce repetition
- presence_penalty (-2 to 2) - Encourage new topics
- stop - Stop sequences to end generation

Temperature guidelines:
- temperature = 0: Deterministic, factual tasks (data extraction, classification)
- temperature = 0.7: Balanced creativity (chat, support)
- temperature = 1.5+: High creativity (storytelling, brainstorming)

Available parameters depend on the selected model. Some parameters may not be supported by all providers.
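The documented ranges above can be checked before a request goes out, as in this sketch. The `validate_config` helper is hypothetical, not part of the node; it simply encodes the ranges listed in this section.

```python
# Sketch (hypothetical helper): validate an Additional LLM Config dict
# against the documented parameter ranges before sending it to a provider.
RANGES = {
    "temperature": (0, 2),
    "top_p": (0, 1),
    "frequency_penalty": (-2, 2),
    "presence_penalty": (-2, 2),
}

def validate_config(config: dict) -> dict:
    """Raise if any known numeric parameter is outside its documented range."""
    for name, (lo, hi) in RANGES.items():
        if name in config and not lo <= config[name] <= hi:
            raise ValueError(f"{name}={config[name]} outside [{lo}, {hi}]")
    return config

cfg = validate_config({"temperature": 0.7, "max_tokens": 512, "top_p": 0.9})
print(cfg["temperature"])  # 0.7
```

Parameters the helper does not recognize (such as `max_tokens` or `stop`) pass through untouched, mirroring how unsupported parameters vary by provider.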
Node Handles
The LLM node has specialized input and output handles for different workflow paths: an input handle and output handles.

Input Handle (Left Side - Main Input)
Receives data from previous nodes in the workflow. This is the primary execution trigger and data source for the LLM. Accepts:
- Workflow context
- User input/messages
- Previous node outputs
- Template variables
Tool Calling Behavior
The LLM node’s tool calling behavior differs based on its context in the workflow: outside subflows (main workflow) or inside subflows (agent pattern).

Outside Subflows (Main Workflow)
Reactive Multi-Turn - One Call Per Execution
When an LLM node is in the main workflow (not inside a subflow), tool execution is one-directional within a single workflow execution.

Flow (single execution):
1. LLM analyzes the request
2. Decides which tool to use (if any)
3. Calls the selected tool via the TOOLS handle
4. Tool executes and completes
5. Results go to the next node (not back to the LLM in the same execution)
6. Tool results are stored in memory (if memory is connected)
7. Workflow completes

On the next execution:
- LLM sees previous tool calls and results in memory
- Can decide to call tools again based on memory
- This is reactive - it requires an external trigger between tool calls

Characteristics:
- ✅ Tool results stored in memory
- ✅ Multi-turn across executions (reactive)
- ❌ No loop within a single execution
- ❌ Requires an external trigger (user message) between tool calls

Best for:
- Chat-based assistants (user triggers each step)
- Interactive workflows
- Human-in-the-loop tool calling
- Conversational agents with memory

Reactive Multi-Turn: The LLM can use tools across multiple executions via memory, but needs external triggers. It’s conversational, not autonomous.
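The reactive pattern can be sketched in a few lines: each user message triggers one workflow execution, tool results persist in memory, and the next execution builds on them. All names (`run_tool`, `execute_workflow`, `web_search`) are hypothetical stand-ins, not the node's actual internals.

```python
# Sketch of the reactive multi-turn pattern: one tool pass per execution,
# with results persisted in shared memory for the next execution.

memory: list[dict] = []  # persists across executions (e.g., a memory node)

def run_tool(name: str, query: str) -> str:
    """Stand-in for a connected tool (search, API call, etc.)."""
    return f"results for {query!r} from {name}"

def execute_workflow(user_message: str) -> str:
    """One workflow execution: at most one tool pass, then completion."""
    memory.append({"role": "user", "content": user_message})
    # The LLM's tool decision is simulated with a keyword check here.
    if "search" in user_message.lower():
        result = run_tool("web_search", user_message)
        memory.append({"role": "tool", "content": result})
        return result  # goes to the next node, not back to the LLM
    return f"answered from memory of {len(memory)} messages"

execute_workflow("Please search for LLM nodes")       # execution 1: tool call
reply = execute_workflow("Summarize what you found")  # execution 2: sees memory
print(reply)  # answered from memory of 3 messages
```

The key point the sketch captures: the tool result in execution 1 never loops back to the LLM within that execution; it only becomes visible via memory when the user's next message triggers execution 2.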
| Context | Tool Results | Multi-Turn Type | Trigger Required | Use Case |
|---|---|---|---|---|
| Main Workflow | → Stored in Memory | Reactive (across executions) | ✅ User message | Chat assistants |
| Inside Subflow | → Back to LLM | Proactive (within execution) | ❌ Autonomous | Agent patterns |
- Reactive: Conversational - user drives the conversation
- Proactive: Autonomous - LLM drives the execution
Key Features
The LLM node combines model execution, tool calling, and memory integration:
- Multi-Provider Support: OpenAI, Anthropic, OpenRouter, custom providers
- Model Selection: Choose from hundreds of models
- System Prompts: Define AI behavior with template variables
- Streaming: Real-time response generation
- Temperature Control: Adjust creativity vs. consistency
Example Use Cases
Customer Support
Build AI agents with CRM tool access for automated support
Research Assistant
Create agents that search, analyze, and synthesize information
Code Generation
Generate and execute code in sandboxes with multi-step refinement
Content Creation
Multi-step content generation with research and fact-checking

