
LLM Node

Purpose: Execute AI model completions with tool calling, streaming, and memory. The LLM node is the core of agentic workflows, enabling AI models to generate responses, make decisions, and use tools.
LLM node visual representation

Configuration

Name
Description: A user-defined name to identify this LLM node in your workflow.
Type: String
Required: Yes
Example:
"Customer Support Agent"
"Code Generator"
"Research Assistant"

Integration
Description: Select the LLM provider integration you want to use (OpenAI, Anthropic, OpenRouter, etc.).
Type: Select (Dynamic)
Required: Yes
Options: Dynamically loaded from your connected LLM integrations:
  • OpenAI
  • Anthropic
  • OpenRouter
  • Custom providers
Setup:
  1. Go to Integrations page
  2. Connect your LLM provider
  3. The integration appears in this dropdown
You must connect an LLM integration before you can use the LLM node. Multiple integrations from the same provider can be added with different labels.

Model
Description: Select the specific AI model to use for completions.
Type: Select (Dynamic)
Required: Yes
Visibility: Only shown after Integration is selected
Options: Dynamically filtered based on the selected integration:
  • OpenAI: GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, o1-mini, o1-preview
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • OpenRouter: 200+ models from various providers
Example:
gpt-4o
claude-3-5-sonnet-20241022
deepseek/deepseek-chat
Different models have different capabilities, costs, and context windows. Check the Providers docs for pricing and features.

Response Mode
Description: Defines the format of the model’s output.
Type: Select
Required: Yes
Visibility: Only shown after Model is selected
Options:
  • Tool Calling - LLM can call connected tools and functions (for agentic workflows)
  • JSON - LLM returns structured JSON matching a defined schema
  • Text - LLM returns plain text response (default chat mode)
When to Use:
  • Tool Calling: Building agents that need to take actions (search, API calls, code execution)
  • JSON: Structured data extraction, form filling, classification tasks
  • Text: Simple chat, content generation, Q&A
Example:
tool_calling  # For agents with tools
json          # For structured outputs
text          # For chat/content
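To make the formats concrete, here is an illustrative sketch of what each mode returns. The tool-call structure varies by provider; the field names below are assumptions, not the exact wire format:
# text - a plain string
"Your refund was processed on March 3."

# json - an object matching your schema
{ "issue_category": "billing", "priority": 2 }

# tool_calling - a structured tool invocation (illustrative shape)
{ "tool": "search_orders", "arguments": { "customer_id": "1042" } }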

Tool Choice
Description: Controls how the LLM selects which tools to call.
Type: Select
Required: Conditional
Visibility: Only shown when Response Mode is “Tool Calling”
Options:
  • Auto - LLM decides whether to call tools or respond with text
  • Required - LLM must call at least one tool (no text-only responses)
  • None - LLM cannot call tools (tools are visible but not callable)
Use Cases:
  • Auto: Standard agent behavior - LLM chooses when tools are needed
  • Required: Force tool usage (e.g., always run a search before answering)
  • None: Show tools to LLM for context but prevent calling them
Example:
auto      # Let LLM decide
required  # Always call tools
none      # No tool calls
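As a hypothetical illustration of the difference, assume a single connected search tool (the tool name and shapes below are invented for the example):
# auto: a greeting can get a plain-text reply
User: "hi"  →  "Hello! How can I help?"

# required: every turn must produce a tool call, even a greeting
User: "hi"  →  { "tool": "search_kb", "arguments": { "query": "greeting" } }

# none: the tool is visible in context, but no tool call is emitted
User: "search for X"  →  plain-text answer only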

JSON Schema
Description: JSON schema that defines the structure of the expected response.
Type: Schema Builder
Required: Conditional
Visibility: Only shown when Response Mode is “JSON”
How It Works:
  • Define the JSON structure you want the LLM to return
  • LLM output will conform to this schema
  • Useful for data extraction, classification, form filling
Example Schema:
{
  "type": "object",
  "properties": {
    "customer_name": { "type": "string" },
    "email": { "type": "string", "format": "email" },
    "issue_category": { 
      "type": "string",
      "enum": ["billing", "technical", "general"]
    },
    "priority": { "type": "integer", "minimum": 1, "maximum": 5 }
  },
  "required": ["customer_name", "email", "issue_category"]
}
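For example, a response conforming to this schema might look like the following (values are illustrative):
{
  "customer_name": "Ada Lovelace",
  "email": "ada@example.com",
  "issue_category": "billing",
  "priority": 2
}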
Use the Schema Builder UI to visually create JSON schemas without writing JSON manually.

Strict Mode
Description: Enforces strict validation against the defined schema in JSON response mode.
Type: Boolean
Required: No
Default: true
Visibility: Only shown when Response Mode is “JSON”
Options:
  • true - LLM output must exactly match schema (recommended)
  • false - Allow minor schema deviations
When to Disable:
  • Schema is complex and LLM struggles to match it perfectly
  • You want LLM to have flexibility in output structure
Disabling strict mode may result in invalid JSON outputs that don’t match your schema.
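As a hypothetical pair of outputs for the schema above, strict mode rejects results that drift from the definition:
# strict = true: output must match the schema exactly
{ "customer_name": "Ada Lovelace", "email": "ada@example.com", "issue_category": "billing" }

# strict = false: deviations may slip through, e.g. a missing
# required property ("email") or an unexpected extra field
{ "customer_name": "Ada Lovelace", "issue_category": "billing", "sentiment": "frustrated" }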

System Prompt
Description: System instructions that define the LLM’s behavior, personality, and task.
Type: Editor (supports template variables)
Required: Yes
Features:
  • Template Variables: Use {{variable_name}} to inject data from previous nodes
  • Multi-line: Write detailed instructions
  • Markdown Support: Format prompts for clarity
Example:
You are a customer support agent for {{company_name}}.

Your role:
- Help customers with their questions
- Be friendly and professional
- Escalate complex issues to human agents

Customer Info:
- Name: {{customer.name}}
- Tier: {{customer.tier}}
- Issue: {{customer.issue}}

Respond concisely and helpfully.
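For example, if a previous node supplies customer data, the variables render into plain text before the prompt is sent; the Customer Info block above might become (illustrative values):
Customer Info:
- Name: Jordan Smith
- Tier: Premium
- Issue: Charged twice for the same order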
Best Practices:
  • Be specific about the task
  • Provide examples when possible
  • Use variables to personalize responses
  • Test different prompts to optimize performance

Model Parameters
Description: Advanced model parameters for fine-tuning LLM behavior.
Type: Input Schema (Dynamic)
Required: No
Common Parameters:
  • temperature (0-2) - Controls randomness (0 = focused, 2 = creative)
  • max_tokens - Maximum response length
  • top_p (0-1) - Nucleus sampling for diversity
  • frequency_penalty (-2 to 2) - Reduce repetition
  • presence_penalty (-2 to 2) - Encourage new topics
  • stop - Stop sequences to end generation
Example:
{
  "temperature": 0.7,
  "max_tokens": 1000,
  "top_p": 0.9,
  "frequency_penalty": 0.5
}
Guidelines:
  • temperature = 0: Deterministic, factual tasks (data extraction, classification)
  • temperature = 0.7: Balanced creativity (chat, support)
  • temperature = 1.5+: High creativity (storytelling, brainstorming)
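As a sketch, those guidelines map to parameter sets like the following (values are illustrative starting points, not tuned recommendations):
# data extraction / classification
{ "temperature": 0, "max_tokens": 500 }

# chat / support
{ "temperature": 0.7, "top_p": 0.9 }

# storytelling / brainstorming
{ "temperature": 1.5, "presence_penalty": 0.5 }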
Available parameters depend on the selected model. Some parameters may not be supported by all providers.

Node Handles

The LLM node has specialized input and output handles for different workflow paths:
Left Side - Main Input
Receives data from previous nodes in the workflow. This is the primary execution trigger and data source for the LLM.
Accepts:
  • Workflow context
  • User input/messages
  • Previous node outputs
  • Template variables
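For example, an upstream node might pass a payload like this, whose fields then become available as template variables such as {{company_name}} and {{customer.name}} in the system prompt (field names are illustrative):
{
  "company_name": "Acme Corp",
  "customer": { "name": "Jordan Smith", "tier": "Premium", "issue": "Charged twice" },
  "message": "Why was I charged twice?"
}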

Tool Calling Behavior

The LLM node’s tool calling behavior differs based on its context in the workflow:
Reactive Multi-Turn - One Call Per Execution
When an LLM node is in the main workflow (not inside a subflow), tool execution is one-directional within a single workflow execution.
Flow (Single Execution):
  1. LLM analyzes the request
  2. Decides which tool to use (if any)
  3. Calls the selected tool via TOOLS handle
  4. Tool executes and completes
  5. Results go to the next node (not back to the LLM in the same execution)
  6. Tool results are stored in memory (if memory connected)
  7. Workflow completes
Next Execution (User Triggers Again):
  • LLM sees previous tool calls and results in memory
  • Can decide to call tools again based on memory
  • This is reactive - requires external trigger between tool calls
Key Point: Within a single execution, the LLM cannot loop back to see tool results. However, tool results are stored in memory and available on subsequent workflow triggers.
Example:
Execution 1: User asks → LLM calls search tool → Tool results stored in memory → End

Execution 2: User asks follow-up → LLM sees search results in memory → LLM calls another tool → End

Execution 3: User continues → LLM sees all previous tool calls → Generates final answer
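For instance, at the start of Execution 2 the memory might contain a chat-style log like this (the exact storage format depends on the connected memory node; this shape is an assumption):
[
  { "role": "user", "content": "Find recent orders for Acme Corp" },
  { "role": "assistant", "tool_call": { "tool": "search_orders", "arguments": { "query": "Acme Corp" } } },
  { "role": "tool", "content": "3 orders found: #1042, #1043, #1044" },
  { "role": "user", "content": "Which one shipped last?" }
]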
Characteristics:
  • ✅ Tool results stored in memory
  • ✅ Multi-turn across executions (reactive)
  • ❌ No loop within single execution
  • ❌ Requires external trigger (user message) between tool calls
Use Cases:
  • Chat-based assistants (user triggers each step)
  • Interactive workflows
  • Human-in-the-loop tool calling
  • Conversational agents with memory
Reactive Multi-Turn: The LLM can use tools across multiple executions via memory, but needs external triggers. It’s conversational, not autonomous.
Architecture Summary:
Context          Tool Results          Multi-Turn Type                 Trigger Required     Use Case
Main Workflow    → Stored in Memory    Reactive (across executions)    ✅ User message      Chat assistants
Inside Subflow   → Back to LLM         Proactive (within execution)    ❌ Autonomous        Agent patterns
Key Distinction:
  • Reactive: Conversational - user drives the conversation
  • Proactive: Autonomous - LLM drives the execution

Key Features

  • Multi-Provider Support: OpenAI, Anthropic, OpenRouter, custom providers
  • Model Selection: Choose from hundreds of models
  • System Prompts: Define AI behavior with template variables
  • Streaming: Real-time response generation
  • Temperature Control: Adjust creativity vs. consistency

Example Use Cases

Customer Support

Build AI agents with CRM tool access for automated support

Research Assistant

Create agents that search, analyze, and synthesize information

Code Generation

Generate and execute code in sandboxes with multi-step refinement

Content Creation

Multi-step content generation with research and fact-checking

What’s Next?