Tuesday, March 31, 2026

AGENTIC AI WITH N8N: A COMPLETE DEVELOPER'S TUTORIAL

 




TABLE OF CONTENTS

  Part 1: The Big Picture
    Chapter 1  - What Is a Workflow Engine and Why Should You Care?
    Chapter 2  - Meet N8N: The Workflow Engine Built for the AI Era
    Chapter 3  - Installing and Deploying N8N

  Part 2: N8N Fundamentals
    Chapter 4  - Your First Workflow: Nodes, Connections, and Triggers
    Chapter 5  - Working with Data: JSON, Expressions, and the Code Node
    Chapter 6  - Connecting to the Outside World: HTTP Requests and Webhooks

  Part 3: Building Agentic AI
    Chapter 7  - What Is Agentic AI? The Theory Behind the Magic
    Chapter 8  - Your First AI Agent: Connecting an LLM to N8N
    Chapter 9  - Local LLMs with Ollama: Private, Free, and Powerful
    Chapter 10 - Giving Your Agent a Memory: Vector Stores and RAG
    Chapter 11 - Giving Your Agent Tools: HTTP Request Tools and Custom Actions
    Chapter 12 - Multi-Agent Orchestration: Building Agent Teams
    Chapter 13 - A Complete Agentic AI Project: The Research Assistant

  Appendix A  - Environment Variables Reference
  Appendix B  - Troubleshooting Common Issues

PART 1: THE BIG PICTURE


CHAPTER 1: WHAT IS A WORKFLOW ENGINE AND WHY SHOULD YOU CARE?

Imagine you are the only developer at a small company. Every morning you manually download a CSV file from your sales platform, paste it into a spreadsheet, calculate some totals, email a summary to your manager, and then post a brief update to the company Slack channel. This takes you about forty-five minutes each day. Multiply that by two hundred and fifty working days per year and you have spent more than one hundred and eighty hours on a single repetitive task. That is more than four full work weeks doing something a machine could do while you sleep.

This is the problem that workflow engines exist to solve. A workflow engine is a software system that allows you to define, execute, and monitor automated sequences of tasks. You describe what should happen, in what order, under what conditions, and the engine takes care of actually doing it. The tasks in a workflow can involve calling APIs, transforming data, sending messages, querying databases, running code, or triggering other workflows. The engine handles scheduling, error recovery, retries, logging, and the plumbing that connects everything together.

Workflow engines have existed in enterprise software for decades. Early systems like IBM MQ Series and Microsoft BizTalk were designed for large corporations integrating mainframe systems. They were powerful but required specialized knowledge, expensive licenses, and teams of consultants to operate. The modern generation of workflow engines is dramatically more accessible. Tools like Apache Airflow, Prefect, and Temporal brought workflow automation into the Python ecosystem. Zapier and Make (formerly Integromat) brought it to non-technical users through visual, drag-and-drop interfaces. N8N sits in a fascinating middle ground: it has the visual interface of a consumer tool but the power, extensibility, and self-hosting capability of a developer-grade platform.

To understand why workflow engines matter for AI development specifically, consider what an AI application actually does at runtime. It receives some input, calls a language model API, possibly calls several other APIs to gather context, processes the responses, stores results somewhere, and returns an output. Each of those steps is a node in a workflow. The logic that decides which step to execute next, based on the results of the previous step, is exactly what a workflow engine manages. When you build an AI agent, you are essentially building a sophisticated workflow where one of the nodes happens to be a large language model that can dynamically decide which other nodes to invoke. A workflow engine gives you the infrastructure to build, test, monitor, and scale that system without writing hundreds of lines of boilerplate code.

The key capabilities that any serious workflow engine must provide include reliable execution with retry logic when things fail, a way to pass data between steps, scheduling so workflows can run automatically at specified times, triggering from external events like webhooks, logging and observability so you can see what happened and debug problems, and some mechanism for handling secrets and credentials securely. N8N provides all of these out of the box, and as you will see throughout this tutorial, it adds a rich set of AI-specific capabilities on top of that foundation.


CHAPTER 2: MEET N8N: THE WORKFLOW ENGINE BUILT FOR THE AI ERA

N8N (pronounced "n-eight-n", short for "nodemation") was created by Jan Oberhauser and first released in 2019. The name comes from the pattern of having eight letters between the first "n" and the last "n" in the word "automation". It is open-source software licensed under the Sustainable Use License, which means you can self-host it for free for most purposes. A cloud-hosted version is also available at n8n.io if you prefer not to manage your own infrastructure.

What makes N8N particularly compelling for AI development in 2025 is the combination of several factors that no other tool currently matches all at once.

N8N is self-hostable, which means your data, your API keys, and your workflows stay on your own infrastructure. When you are building AI systems that process sensitive business data or personal information, this matters enormously. You are not sending your data through a third-party cloud service whose privacy practices you cannot fully audit.

N8N has deep, first-class integration with LangChain, the most widely used framework for building LLM-powered applications. Rather than requiring you to write LangChain code from scratch, N8N exposes LangChain's concepts as visual nodes: AI Agent nodes, Memory nodes, Vector Store nodes, Embedding nodes, and Tool nodes. You can build sophisticated LangChain pipelines visually, and when you need custom behavior, you can drop into a Code node and write JavaScript or Python.

N8N has over five hundred pre-built integrations with external services. This means your AI agent can natively interact with Gmail, Google Sheets, Slack, GitHub, Notion, Airtable, PostgreSQL, MySQL, Stripe, Salesforce, and hundreds of other services without you writing any integration code. Each of these integrations handles authentication, rate limiting, and response parsing for you.

N8N supports a powerful multi-agent architecture through its sub-workflow system. You can build individual specialized agents as separate workflows and then have an orchestrator workflow call them as tools using the dedicated Call n8n Workflow Tool node. This is exactly the architecture that modern agentic AI research recommends: a coordinator that understands the big picture delegating to specialists that are deeply capable in narrow domains.

N8N is actively developed and has a large, growing community. The N8N team releases updates frequently, and the community has built thousands of workflow templates that you can import and adapt. For AI development specifically, new nodes and capabilities are being added at a rapid pace to keep up with the fast-moving LLM ecosystem. As of 2025, n8n ships over 70 dedicated AI nodes covering LLMs, embeddings, vector databases, speech, OCR, and image models.

Before we go any further, let us establish some vocabulary that we will use throughout this tutorial. A workflow is a complete automation pipeline, a directed graph of connected nodes. A node is a single step in a workflow that performs one specific action. A trigger is a special type of node that starts the workflow in response to some event. A connection is the arrow between two nodes that defines both the execution order and the data flow path. An execution is one complete run of a workflow from trigger to completion. Credentials are securely stored authentication details like API keys and passwords that nodes use to connect to external services.

Now let us get N8N running on your machine.


CHAPTER 3: INSTALLING AND DEPLOYING N8N

N8N can be installed in two primary ways: using Docker or using npm (Node Package Manager). Docker is strongly recommended for anything beyond quick local experimentation because it provides isolation, reproducibility, and makes upgrading straightforward. The npm method is fine for development and learning, which is what we will focus on first.

System Requirements

Before you begin, make sure your machine meets these requirements. You need at least a dual-core CPU, though four cores will give you a noticeably better experience when running local AI models alongside N8N. You need at least 2 GB of RAM, though 8 GB is strongly recommended if you plan to run Ollama for local LLMs simultaneously. You need at least 10 GB of free disk space. If you plan to use Docker, you need Docker Desktop installed. If you plan to use npm, you need Node.js version 18 or higher installed (Node.js 20 LTS is the recommended choice for stability).

Method 1: Installing with npm (Quickest for Learning)

Open your terminal and run the following command to install N8N globally on your system:

npm install n8n -g

On Linux or macOS, you may need to prefix this with sudo if you encounter permission errors. Once the installation completes, start N8N with:

n8n start

N8N will initialize its local SQLite database, start its internal server, and print a message telling you it is ready. Open your browser and navigate to:

http://localhost:5678

You will see the N8N setup screen where you create your owner account. Enter your email address and choose a password. This account is the administrator of your local N8N instance. After creating the account, you will land on the N8N canvas.

To keep N8N running in the background even after you close your terminal, install PM2, which is a process manager for Node.js applications:

npm install pm2 -g
pm2 start n8n
pm2 save

The pm2 save command ensures N8N restarts automatically if your machine reboots.


Figure 1 — The N8N Interface Layout

┌─────────────────────────────────────────────────────────────────────────────┐
│  N8N Interface                                                              │
├──────────────┬──────────────────────────────────────────────────────────────┤
│              │  TOOLBAR: [Save] [Execute] [Active Toggle] [Zoom]            │
│  SIDEBAR     ├──────────────────────────────────────────────────────────────┤
│              │                                                              │
│ ▶ Workflows  │                    CANVAS (infinite workspace)               │
│ ▶ Credentials│                                                              │
│ ▶ Executions │    ┌──────────────┐        ┌──────────────┐                  │
│ ▶ Templates  │    │ Manual       │───────▶│ HTTP Request │                  │
│ ▶ Variables  │    │ Trigger      │        │ Node         │                  │
│              │    └──────────────┘        └──────────────┘                  │
│              │                                                              │
│              │    Click any node to open its configuration panel ──────▶    │
│              │                                                              │
│              ├──────────────────────────────────────────────────────────────┤
│              │  NODE CONFIG PANEL (opens on right when node is selected)    │
│              │  ┌──────────────────────────────────────────────────────┐    │
│              │  │ Node Name │ Parameters │ Settings │ Notes            │    │
│              │  └──────────────────────────────────────────────────────┘    │
└──────────────┴──────────────────────────────────────────────────────────────┘

Method 2: Installing with Docker (Recommended for Production)

Docker provides a clean, isolated environment for N8N. First, create a persistent volume so your workflows and data survive container restarts:

docker volume create n8n_data

Then start N8N in a Docker container:

docker run -d \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n

The -d flag runs the container in the background. The --name n8n gives the container a memorable name. The -p 5678:5678 maps port 5678 on your host machine to port 5678 inside the container. The -v n8n_data:/home/node/.n8nmounts your persistent volume so data is saved even if you stop or remove the container.

Method 3: Production Docker Compose with PostgreSQL (Recommended for Teams)

For production deployments, Docker Compose with PostgreSQL provides far better performance and reliability than SQLite. Create two files in the same directory.

First, create a .env file to store your secrets and configuration:

# ── N8N Application Settings ──────────────────────────────────────────────
N8N_PROTOCOL=https
N8N_HOST=your.n8n.domain.com
WEBHOOK_URL=https://your.n8n.domain.com/
# Generate with: openssl rand -hex 32
N8N_ENCRYPTION_KEY=your_strong_random_encryption_key_here
GENERIC_TIMEZONE=America/New_York
TZ=America/New_York

# ── PostgreSQL Database Settings ──────────────────────────────────────────
# DB_TYPE must be exactly "postgresdb" (not "postgres")
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your_secure_postgres_password_here

# These are used by the postgres container itself
POSTGRES_USER=n8n
POSTGRES_PASSWORD=your_secure_postgres_password_here
POSTGRES_DB=n8n

# ── Redis (Optional — uncomment to enable queue mode) ─────────────────────
# QUEUE_HEALTH_CHECK_ACTIVE=true
# QUEUE_BULL_REDIS_HOST=redis
# REDIS_HOST=redis

Then create the docker-compose.yml file:

version: '3.8'

services:

  # ── PostgreSQL Database ────────────────────────────────────────────────
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # ── N8N Application ───────────────────────────────────────────────────
  n8n:
    image: n8nio/n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_PROTOCOL=${N8N_PROTOCOL}
      - N8N_HOST=${N8N_HOST}
      - WEBHOOK_URL=${WEBHOOK_URL}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${TZ}
      - DB_TYPE=${DB_TYPE}
      - DB_POSTGRESDB_HOST=${DB_POSTGRESDB_HOST}
      - DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE}
      - DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD}
      # Uncomment the lines below if using Redis for queue mode:
      # - QUEUE_HEALTH_CHECK_ACTIVE=${QUEUE_HEALTH_CHECK_ACTIVE}
      # - QUEUE_BULL_REDIS_HOST=${QUEUE_BULL_REDIS_HOST}
      # - REDIS_HOST=${REDIS_HOST}
    volumes:
      - n8n_data:/home/node/.n8n
    healthcheck:
      test: ["CMD-SHELL",
             "wget --no-verbose --tries=1 --spider http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    # n8n will not start until postgres passes its health check
    depends_on:
      postgres:
        condition: service_healthy

  # ── Redis (Optional — uncomment for queue mode) ───────────────────────
  # redis:
  #   image: redis:7
  #   restart: unless-stopped
  #   volumes:
  #     - redis_data:/data
  #   healthcheck:
  #     test: ["CMD", "redis-cli", "ping"]
  #     interval: 5s
  #     timeout: 3s
  #     retries: 5

volumes:
  n8n_data:
  postgres_data:
  # redis_data:   # Uncomment if using Redis

Before running: generate your encryption key with openssl rand -hex 32, replace all placeholder values in the .env file, and set up a reverse proxy (Nginx or Caddy) for HTTPS in production. The depends_on with condition: service_healthy ensures n8n never starts before PostgreSQL is fully ready, preventing the most common startup crash.

Start everything with:

docker compose up -d

Important Note About Ollama and Docker Networking

Later in this tutorial we will use Ollama to run local LLMs. If you run N8N inside Docker and Ollama on your host machine, they are on different networks by default. To allow the N8N container to reach Ollama on your host, you have two options. On Mac and Windows, use the special hostname host.docker.internal instead of localhost when configuring the Ollama URL in N8N. On Linux, add --add-host=host.docker.internal:host-gateway to your docker run command, or add it under the n8n service in your Compose file:

extra_hosts:
  - "host.docker.internal:host-gateway"

We will revisit this in Chapter 9.


PART 2: N8N FUNDAMENTALS


CHAPTER 4: YOUR FIRST WORKFLOW: NODES, CONNECTIONS, AND TRIGGERS

The best way to understand N8N is to build a simple workflow and observe how data flows through it. We will start with something concrete and progressively add complexity. Our first workflow will fetch a random piece of programming wisdom from a public API and format it into a readable message.

Understanding the Data Flow Model

Before touching the canvas, study this diagram carefully. It is the single most important concept in N8N.


Figure 2 — How Data Flows Between N8N Nodes

                        N8N DATA FLOW MODEL
                        ═══════════════════

  Every node receives an ARRAY OF ITEMS, processes them,
  and outputs an ARRAY OF ITEMS to the next node.

  ┌─────────────────────────────────────────────────────────────┐
  │                                                             │
  │   INPUT ITEMS          NODE           OUTPUT ITEMS          │
  │                                                             │
  │   [ item1,    ]                       [ item1',   ]         │
  │   [ item2,    ]  ──▶  [  NODE  ]  ──▶ [ item2',   ]         │
  │   [ item3     ]                       [ item3'    ]         │
  │                                                             │
  │   Each item is a JSON object:                               │
  │   {                                                         │
  │     json: {          ◀── your actual data lives here        │
  │       field1: "value",                                      │
  │       field2: 42                                            │
  │     }                                                       │
  │   }                                                         │
  │                                                             │
  │   A node may:                                               │
  │   • Transform items (1-in → 1-out per item)                 │
  │   • Split items   (1-in → many-out)                         │
  │   • Merge items   (many-in → 1-out)                         │
  │   • Create items  (trigger → first items in chain)          │
  │                                                             │
  └─────────────────────────────────────────────────────────────┘

  EXAMPLE: HTTP Request node receiving a single trigger item
  and producing one item containing the API response:

  Trigger ──▶ [ {json:{}} ]  ──▶  HTTP Request  ──▶  [ {json: {quote:"...", author:"..."}} ]

Creating a New Workflow

In the N8N interface, click the New Workflow button. You will see a blank canvas with a single node already placed: the Manual Trigger node. This node is your starting point. When you click the Execute Workflow button during development, N8N simulates a trigger event and starts executing from this node.

Click on the Manual Trigger node to see its configuration panel. Notice that the node has a small circle on its right side. That circle is the output connector. You connect other nodes to it by dragging from this circle to the input connector of the next node.

Adding Your First Action Node

Click the plus button that appears to the right of the Manual Trigger node, or press Tab anywhere on the canvas to open the node search panel. Type HTTP Request and select the HTTP Request node. This node appears on the canvas connected to the Manual Trigger.

Click on the HTTP Request node to open its configuration. Set the URL field to:

https://programming-quotesapi.vercel.app/api/random

Set the Method to GET. That is all the configuration this node needs. The API does not require authentication.

Running Your First Workflow

Click the Execute Workflow button. N8N will run the Manual Trigger, which immediately passes control to the HTTP Request node. After execution, you will see a green checkmark on each node that ran successfully. Click on the HTTP Request node to see the data that flowed through it. The response will look something like:

{
  "id": 42,
  "quote": "Any fool can write code that a computer can understand. Good programmers write code that humans can understand.",
  "author": "Martin Fowler"
}

Adding a Data Transformation Node

Click the plus button to the right of the HTTP Request node and search for Edit Fields. Select the Edit Fields (Set)node. Click Add Field, set the field name to formattedMessage, and enter this expression as the value:

{{ $json.author }} once said: {{ $json.quote }}

The double curly braces indicate an expression evaluated at runtime. $json refers to the JSON data of the current item flowing through the node.

Triggers: The Starting Point of Every Workflow

A workflow without a trigger is like a function that is never called. N8N provides several types of triggers:

  • Schedule Trigger — starts your workflow at specified time intervals (cron-based)
  • Webhook Trigger — turns your workflow into an HTTP endpoint triggered by external services
  • Chat Trigger — a specialized webhook trigger that provides a simple chat interface, ideal for testing AI agents
  • App Event Triggers — pre-built triggers for specific applications (GitHub, Stripe, Slack, etc.)

For AI agent workflows, the Chat Trigger and Webhook Trigger are the most commonly used. The Chat Trigger is great during development; the Webhook Trigger is how you connect your agent to production systems.


CHAPTER 5: WORKING WITH DATA: JSON, EXPRESSIONS, AND THE CODE NODE

N8N's visual interface handles many common data manipulation tasks, but real-world workflows often require custom logic. N8N gives you two powerful tools: the expression system for inline data transformations, and the Code node for writing full JavaScript or Python programs.

The Expression System

Expressions in N8N are small snippets of JavaScript-like code embedded directly in node configuration fields, enclosed in {{ }}. The most important variable is $json, which refers to the JSON data of the current item. N8N also provides:

  • $items() — returns all items from a specific node
  • $node["Node Name"].json — accesses output of any named node
  • $now — current datetime (Luxon library)
  • $today — today's date
  • $workflow — workflow metadata
  • $execution — execution metadata including a unique ID

Example of a formatted timestamp expression:

{{ $now.setZone('America/New_York').toFormat('MMMM dd, yyyy HH:mm') }}

The Code Node: JavaScript vs Python — Critical Differences

⚠️ IMPORTANT: Language-Specific Variable Names

N8N uses different variable prefixes for JavaScript and Python in Code nodes:

LanguageAccess all itemsAccess current itemDollar/Underscore
JavaScript$input.all()$input.item$ prefix
Python_input.all()_input.item_ prefix

This is one of the most common sources of confusion for developers switching between languages in N8N. The $ prefix used in JavaScript expressions becomes a _ prefix in Python Code nodes.

⚠️ WARNING: Legacy Code You May Find Online

Many older N8N tutorials (written for n8n 1.x) use these deprecated patterns that no longer work in n8n 2.0+:

// ❌ OLD (n8n 1.x only) — do NOT use:
return items[0].json.results.map(item => ({json: item}));
$return.push({json: item});

The correct modern equivalents are shown in the examples below. Always use $input.all() in JavaScript and _input.all() in Python.

The Code Node: Two Execution Modes

The Code node has two modes that affect how your code runs and what it must return:


Figure 3 — Code Node Execution Modes

  ┌─────────────────────────────────────────────────────────────────────┐
  │                    CODE NODE EXECUTION MODES                        │
  ├─────────────────────────────┬───────────────────────────────────────┤
  │  Run Once for All Items     │  Run Once for Each Item               │
  │  (DEFAULT)                  │                                       │
  ├─────────────────────────────┼───────────────────────────────────────┤
  │                             │                                       │
  │  [item1]──┐                 │  [item1] ──▶ [CODE] ──▶ [result1]     │
  │  [item2]──┼──▶ [CODE] ──▶  │  [item2] ──▶ [CODE] ──▶ [result2]      │
  │  [item3]──┘    [results]    │  [item3] ──▶ [CODE] ──▶ [result3]     │
  │                             │                                       │
  │  Your code runs ONCE        │  Your code runs ONCE PER ITEM         │
  │  Receives ALL items at once │  Receives ONE item at a time          │
  │                             │                                       │
  │  JS:  $input.all()          │  JS:  $input.item.json                │
  │  Py:  _input.all()          │  Py:  _input.item['json']             │
  │                             │                                       │
  │  Must return:               │  Must return:                         │
  │  [ {json:{...}},            │  { json: {...} }                      │
  │    {json:{...}} ]           │  (single object, NOT an array)        │
  │                             │                                       │
  └─────────────────────────────┴───────────────────────────────────────┘

JavaScript Code Node Examples

Example 1 — "Run Once for All Items": Age Calculation and Grouping

// ── Age Calculator Code Node ──────────────────────────────────────────────
// Mode: Run Once for All Items
//
// Receives a list of users, each with a 'birthdate' field (ISO string),
// and returns the same users enriched with 'age' and 'ageGroup' fields.
//
// Access pattern for this mode:
//   $input.all()  →  array of n8n items, each with a .json property

const items = $input.all();

// Configuration: age group boundaries.
// Keeping this separate from logic makes it easy to adjust.
const AGE_GROUPS = {
  youth:  { min: 0,  max: 17,  label: 'Youth (0-17)'       },
  young:  { min: 18, max: 34,  label: 'Young Adult (18-34)' },
  middle: { min: 35, max: 54,  label: 'Middle Aged (35-54)' },
  senior: { min: 55, max: 999, label: 'Senior (55+)'        },
};

// Pure helper: calculate age in whole years from a birthdate string.
function calculateAge(birthdateString) {
  const birthdate = new Date(birthdateString);
  const today     = new Date();
  let age         = today.getFullYear() - birthdate.getFullYear();

  // Adjust if the birthday has not occurred yet this calendar year.
  const monthDiff = today.getMonth() - birthdate.getMonth();
  if (monthDiff < 0 || (monthDiff === 0 && today.getDate() < birthdate.getDate())) {
    age--;
  }
  return age;
}

// Pure helper: map a numeric age to an age group label.
function getAgeGroup(age) {
  for (const group of Object.values(AGE_GROUPS)) {
    if (age >= group.min && age <= group.max) {
      return group.label;
    }
  }
  return 'Unknown';
}

// Process every item and return enriched data.
// REQUIRED FORMAT: return an array of objects, each with a 'json' key.
return items.map(item => {
  const user = item.json;                   // Access the item's data
  const age  = calculateAge(user.birthdate);

  return {
    json: {
      ...user,                              // Preserve all original fields
      age:      age,
      ageGroup: getAgeGroup(age),
    }
  };
});

Example 2 — "Run Once for Each Item": Formatting a Single Record

// ── Name Formatter Code Node ──────────────────────────────────────────────
// Mode: Run Once for Each Item
//
// In this mode the code runs once per incoming item.
// Access the current item's data directly via $input.item.json
// (or equivalently via $json in expression context).
//
// REQUIRED FORMAT: return a SINGLE object with a 'json' key (NOT an array).

const data = $input.item.json;

return {
  json: {
    ...data,
    fullName:      `${data.firstName} ${data.lastName}`,
    nameUpperCase: `${data.firstName} ${data.lastName}`.toUpperCase(),
    processed:     true,
  }
};

Example 3 — Making HTTP Requests from a Code Node

// ── GitHub Profile Enricher Code Node ────────────────────────────────────
// Mode: Run Once for All Items
//
// Fetches each user's GitHub profile and merges it with their existing data.
// Uses n8n's built-in HTTP helper (this.helpers.httpRequest) rather than
// the native fetch API, as the helper integrates with n8n's credential system.

const items = $input.all();
const enrichedUsers = [];

for (const item of items) {
  const user     = item.json;
  const username = user.githubUsername;

  // Skip users without a GitHub username gracefully.
  if (!username) {
    enrichedUsers.push({ json: { ...user, githubData: null } });
    continue;
  }

  try {
    const githubProfile = await this.helpers.httpRequest({
      method:  'GET',
      url:     `https://api.github.com/users/${username}`,
      headers: {
        'User-Agent': 'n8n-workflow',
        'Accept':     'application/vnd.github.v3+json',
      },
    });

    enrichedUsers.push({
      json: {
        ...user,
        githubData: {
          publicRepos: githubProfile.public_repos,
          followers:   githubProfile.followers,
          following:   githubProfile.following,
          createdAt:   githubProfile.created_at,
          bio:         githubProfile.bio,
        },
      }
    });

  } catch (error) {
    // One failed API call must not stop the entire batch.
    // Record the error and continue with remaining items.
    enrichedUsers.push({
      json: {
        ...user,
        githubData:  null,
        githubError: error.message,
      }
    });
  }
}

return enrichedUsers;

Python Code Node Examples

⚠️ Python in N8N — Important Limitations

  • Python support in n8n is provided via Pyodide (CPython compiled to WebAssembly). You are limited to packages bundled with Pyodide — external packages like pandas or numpy are not available unless you use a custom Docker image.
  • Input variables use the _input prefix (underscore), not $input.
  • The return format is a list of dicts with "json" keys — identical structure to JavaScript but using Python dict syntax.
  • If you encounter {"$$flags": 31} in output, use json.dumps() to serialize complex nested objects.

Example 4 — Python "Run Once for All Items": Sales Statistics

# ── Sales Statistics Code Node ────────────────────────────────────────────
# Mode: Run Once for All Items
#
# Receives sales records and returns per-category summary statistics.
#
# CORRECT input access for Python in n8n 2.0+:
#   _input.all()  →  list of all incoming items
#   item['json']  →  the actual data dict for each item
#
# REQUIRED return format:
#   A list of dicts, each with a "json" key containing your data.

from collections import defaultdict

# Retrieve all incoming items using the Python-specific _input variable.
all_items = _input.all()

# Group sales amounts by product category.
sales_by_category = defaultdict(list)

for item in all_items:
    sale     = item['json']                          # Access item data
    category = sale.get('category', 'Uncategorized')
    sales_by_category[category].append(sale['amount'])

# Calculate summary statistics for each category.
results = []

for category, amounts in sales_by_category.items():
    total   = sum(amounts)
    count   = len(amounts)
    average = total / count if count > 0 else 0

    # Each result MUST be a dict with a "json" key.
    results.append({
        "json": {
            "category":    category,
            "totalSales":  round(total,   2),
            "orderCount":  count,
            "averageSale": round(average, 2),
        }
    })

# Return the list — n8n uses this as the node's output items.
return results

Example 5 — Python "Run Once for Each Item": Field Transformation

# ── Record Normalizer Code Node ───────────────────────────────────────────
# Mode: Run Once for Each Item
#
# In this mode, _input.item gives you the current single item.
# Return a SINGLE dict with a "json" key (not a list).

current_item = _input.item['json']

normalized = {
    "id":         current_item.get('id'),
    "name":       current_item.get('name', '').strip().title(),
    "email":      current_item.get('email', '').strip().lower(),
    "processed":  True,
}

# Single object return — NOT wrapped in a list for "each item" mode.
return {"json": normalized}

CHAPTER 6: CONNECTING TO THE OUTSIDE WORLD: HTTP REQUESTS AND WEBHOOKS

Almost every interesting workflow involves communicating with external services. N8N provides two primary mechanisms: the HTTP Request node for making outgoing calls to APIs, and the Webhook Trigger for receiving incoming calls from external services.

The HTTP Request Node

The HTTP Request node can call any REST API. For authenticated APIs, create a credential in N8N's credential manager rather than hardcoding API keys. N8N handles injecting the key into the request at runtime.

Example configuration for a weather API call where the city comes dynamically from upstream data:

Method:  GET
URL:     https://api.openweathermap.org/data/2.5/weather

Query Parameters:
  q:     {{ $json.cityName }}
  units: metric

Authentication: Header Auth (select your credential)
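
Behind the scenes, the node resolves the {{ $json.cityName }} expression against the current input item and appends the query parameters to the URL. A standalone sketch of that assembly, using a hypothetical upstream item:

```javascript
// Sketch: how the HTTP Request node assembles the final URL at runtime.
// The item data here is hypothetical — in n8n the expression
// {{ $json.cityName }} is resolved against the current item per request.
const item = { json: { cityName: 'Berlin' } };

const baseUrl = 'https://api.openweathermap.org/data/2.5/weather';
const query = new URLSearchParams({
  q:     item.json.cityName,   // resolved from {{ $json.cityName }}
  units: 'metric',
});

const finalUrl = `${baseUrl}?${query.toString()}`;
// finalUrl → "https://api.openweathermap.org/data/2.5/weather?q=Berlin&units=metric"
```

Each item flowing into the node produces its own resolved URL, which is why one node execution can fan out into many API calls.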

Webhooks: Making Your Workflow Listen

A webhook is an HTTP endpoint that your workflow exposes. When an external service sends an HTTP request to that endpoint, your workflow starts.


Figure 4 — Webhook Data Structure

  INCOMING HTTP POST REQUEST
  ──────────────────────────
  POST /webhook/abc-123 HTTP/1.1
  Content-Type: application/json
  X-Signature: sha256=...

  { "event": "order.created", "orderId": "ORD-001" }

                    │
                    ▼
  ┌─────────────────────────────────────────────────────────┐
  │              WEBHOOK NODE OUTPUT ITEM                   │
  │                                                         │
  │  {                                                      │
  │    json: {                                              │
  │      headers: {                                         │
  │        "content-type": "application/json",              │
  │        "x-signature":  "sha256=..."                     │
  │      },                                                 │
  │      body: {            ◀── your JSON payload           │
  │        event:   "order.created",                        │
  │        orderId: "ORD-001"                               │
  │      },                                                 │
  │      query:  {},        ◀── URL query parameters        │
  │      params: {}         ◀── URL path parameters         │
  │    }                                                    │
  │  }                                                      │
  └─────────────────────────────────────────────────────────┘

  Access in downstream nodes:
    {{ $json.body.orderId }}   →  "ORD-001"
    {{ $json.headers['content-type'] }}  →  "application/json"

Building a Simple Webhook-Based API

Let us build a complete example: a webhook that accepts a JSON payload containing a list of numbers and returns their statistical summary.

The workflow has three nodes: a Webhook Trigger, a Code node for calculations, and a Respond to Webhook node.

// ── Statistics Calculator — Webhook Handler ───────────────────────────────
// Mode: Run Once for All Items
//
// Receives: POST body with shape { "numbers": [1, 2, 3, 4, 5] }
// Returns:  JSON with descriptive statistics
//
// The Webhook node puts the parsed request body at $json.body

const numbers = $json.body.numbers;

// ── Input validation ──────────────────────────────────────────────────────
if (!Array.isArray(numbers) || numbers.length === 0) {
  throw new Error('Request body must contain a non-empty "numbers" array.');
}

const validNumbers = numbers.filter(n => typeof n === 'number' && !isNaN(n));
if (validNumbers.length !== numbers.length) {
  throw new Error('All elements in the "numbers" array must be valid numbers.');
}

// ── Calculations ──────────────────────────────────────────────────────────
const sorted   = [...validNumbers].sort((a, b) => a - b);
const mean     = validNumbers.reduce((sum, n) => sum + n, 0) / validNumbers.length;

const midIndex = Math.floor(sorted.length / 2);
const median   = sorted.length % 2 === 0
  ? (sorted[midIndex - 1] + sorted[midIndex]) / 2
  : sorted[midIndex];

const variance = validNumbers
  .map(n => Math.pow(n - mean, 2))
  .reduce((sum, d) => sum + d, 0) / validNumbers.length;
const stdDev   = Math.sqrt(variance);

// ── Return in n8n's required format ──────────────────────────────────────
return [{
  json: {
    count:  validNumbers.length,
    min:    sorted[0],
    max:    sorted[sorted.length - 1],
    mean:   Math.round(mean   * 1000) / 1000,
    median: Math.round(median * 1000) / 1000,
    stdDev: Math.round(stdDev * 1000) / 1000,
  }
}];

Test with curl after activating the workflow:

curl -X POST http://localhost:5678/webhook/your-webhook-path \
  -H "Content-Type: application/json" \
  -d '{"numbers": [4, 8, 15, 16, 23, 42]}'
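
For that payload, the workflow should return the values below; this snippet simply reproduces the Code node's calculations outside n8n:

```javascript
// Expected response for {"numbers": [4, 8, 15, 16, 23, 42]},
// computed with the same formulas as the Code node above.
const numbers = [4, 8, 15, 16, 23, 42];
const sorted  = [...numbers].sort((a, b) => a - b);
const mean    = numbers.reduce((s, n) => s + n, 0) / numbers.length;

const mid    = Math.floor(sorted.length / 2);
const median = sorted.length % 2 === 0
  ? (sorted[mid - 1] + sorted[mid]) / 2
  : sorted[mid];

const variance = numbers
  .map(n => (n - mean) ** 2)
  .reduce((s, d) => s + d, 0) / numbers.length;

const expectedResponse = {
  count:  numbers.length,                                // 6
  min:    sorted[0],                                     // 4
  max:    sorted[sorted.length - 1],                     // 42
  mean:   Math.round(mean * 1000) / 1000,                // 18
  median: Math.round(median * 1000) / 1000,              // 15.5
  stdDev: Math.round(Math.sqrt(variance) * 1000) / 1000, // 12.315
};
```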

PART 3: BUILDING AGENTIC AI


CHAPTER 7: WHAT IS AGENTIC AI? THE THEORY BEHIND THE MAGIC

Before we start building AI agents in N8N, we need to understand what makes an AI system "agentic" and why that distinction matters.

A traditional AI application follows a fixed pipeline. Input goes in, the AI model processes it, output comes out. It does not make decisions about what to do next. It does not use tools. It does not remember previous interactions. It does not break complex tasks into subtasks. It is a sophisticated function: f(input) = output.

An agentic AI system is fundamentally different. It is goal-directed rather than input-directed. You give it a goal, and it figures out how to achieve that goal by reasoning, planning, taking actions, observing the results of those actions, and adjusting its approach based on what it learns.

The ReAct Loop

The most widely used framework for implementing agentic behavior is called ReAct (Reasoning and Acting). N8N's AI Agent node implements ReAct by default.


Figure 5 — The ReAct Agent Loop

  ┌─────────────────────────────────────────────────────────────────────┐
  │                     THE REACT AGENT LOOP                            │
  │                                                                     │
  │                    ┌─────────────────┐                              │
  │    USER GOAL ──▶   │   OBSERVE       │                              │
  │                    │                 │                              │
  │                    │ • Current goal  │                              │
  │                    │ • Past actions  │                              │
  │                    │ • Tool results  │                              │
  │                    │ • Memory        │                              │
  │                    └────────┬────────┘                              │
  │                             │                                       │
  │                             ▼                                       │
  │                    ┌─────────────────┐                              │
  │                    │   THINK (LLM)   │                              │
  │                    │                 │                              │
  │                    │ "I need to call │                              │
  │                    │  the weather    │                              │
  │                    │  API for Paris" │                              │
  │                    └────────┬────────┘                              │
  │                             │                                       │
  │                             ▼                                       │
  │                    ┌─────────────────┐                              │
  │                    │   ACT           │                              │
  │                    │                 │                              │
  │                    │ call_tool(      │                              │
  │                    │   "weather_api",│                              │
  │                    │   {city:"Paris"}│                              │
  │                    │ )               │                              │
  │                    └────────┬────────┘                              │
  │                             │                                       │
  │                             ▼                                       │
  │                    ┌─────────────────┐                              │
  │                    │   OBSERVE       │◀── Tool result fed back      │
  │                    │   RESULT        │                              │
  │                    │                 │    {"temp":12,"rain":true}   │
  │                    └────────┬────────┘                              │
  │                             │                                       │
  │              ┌──────────────┴──────────────┐                        │
  │              │                             │                        │
  │              ▼                             ▼                        │
  │     [Goal achieved?]              [Need more info?]                 │
  │              │                             │                        │
  │              ▼                             ▼                        │
  │     RESPOND TO USER               LOOP BACK TO THINK                │
  │                                                                     │
  └─────────────────────────────────────────────────────────────────────┘

  The loop continues until:
  • The agent determines the goal is achieved, OR
  • A maximum iteration limit is reached (always set one!)
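
Stripped of the LLM, the loop above is plain control flow. The sketch below hard-codes the THINK step (a stub standing in for the model) purely to make the observe–think–act cycle and the iteration cap concrete:

```javascript
// Minimal sketch of a ReAct-style loop with a stubbed "LLM" and one tool.
// The real AI Agent node delegates THINK to the model; here the decision
// function is hard-coded so the control flow is visible.
const tools = {
  weather_api: ({ city }) => ({ temp: 12, rain: true, city }), // stub tool
};

// Stubbed THINK step: decide the next action from what has been observed.
function think(goal, observations) {
  if (observations.length === 0) {
    return { action: 'call_tool', tool: 'weather_api', input: { city: 'Paris' } };
  }
  const latest = observations[observations.length - 1];
  return { action: 'respond', answer: `It is ${latest.temp}°C in ${latest.city}.` };
}

function runAgent(goal, maxIterations = 10) {
  const observations = [];
  for (let i = 0; i < maxIterations; i++) {      // always bound the loop
    const step = think(goal, observations);       // THINK
    if (step.action === 'respond') return step.answer;
    const result = tools[step.tool](step.input);  // ACT
    observations.push(result);                    // OBSERVE RESULT
  }
  return 'Max iterations reached without achieving the goal.';
}

runAgent('What is the weather in Paris?'); // → "It is 12°C in Paris."
```

The iteration cap is not optional bookkeeping: without it, a confused model can loop on tool calls indefinitely, burning tokens on every pass.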

Tools: The Hands of an Agent

An AI agent without tools is just a chatbot. Tools are what give an agent the ability to interact with the world beyond generating text. In N8N, tools are implemented as nodes connected to the AI Agent node's Tool input ports. The agent uses each tool's description to decide when and how to call it.

Memory: The Mind of an Agent

Memory allows an agent to maintain context across multiple interactions. N8N supports several memory types:

  Memory Type           Persistence              Best For
  ───────────────────   ──────────────────────   ──────────────────────────────────────
  Simple Memory         Session only (RAM)       Development, testing
  Redis Memory          Persistent, fast         Real-time / voice agents
  Postgres Memory       Persistent, structured   Production applications
  Vector Store Memory   Persistent, semantic     Long-running agents with large history

RAG: Giving Your Agent a Knowledge Base

Retrieval Augmented Generation (RAG) gives your agent access to a large body of knowledge without requiring it to fit in the model's context window. Documents are stored as vector embeddings in a database; when the agent needs information, it retrieves only the most semantically relevant chunks.


Figure 6 — RAG Architecture: Two Workflows

  ┌─────────────────────────────────────────────────────────────────────┐
  │  WORKFLOW 1: INDEXING (runs once, or periodically)                  │
  │                                                                     │
  │  [Source Data]                                                      │
  │  (Files, URLs,  ──▶  [Default    ──▶  [Text      ──▶  [Embedding    │
  │   Databases,         Data            Splitter]        Model]        │
  │   APIs)              Loader]                          (converts     │
  │                                                        text to      │
  │                                                        vectors)     │
  │                                                            │        │
  │                                                            ▼        │
  │                                                    [Vector Store]   │
  │                                                    (Qdrant,         │
  │                                                     Pinecone,       │
  │                                                     PGVector, etc.) │
  └─────────────────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────────────────┐
  │  WORKFLOW 2: RETRIEVAL (runs on every user query)                   │
  │                                                                     │
  │  User Query                                                         │
  │      │                                                              │
  │      ▼                                                              │
  │  [AI Agent] ──uses tool──▶ [Vector Store    ──▶  [Vector Store]     │
  │      │                      Retriever Tool]       (same DB as       │
  │      │                                             above)           │
  │      │                           │                                  │
  │      │◀──── relevant chunks ─────┘                                  │
  │      │                                                              │
  │      ▼                                                              │
  │  [LLM generates answer grounded in retrieved context]               │
  │      │                                                              │
  │      ▼                                                              │
  │  Response to User                                                   │
  └─────────────────────────────────────────────────────────────────────┘
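
The heart of Workflow 2 is ranking stored chunk vectors by similarity to the query vector and keeping the top k. A toy sketch with 3-dimensional vectors (real embeddings have hundreds of dimensions; the chunks and vectors here are made up):

```javascript
// Cosine similarity: how close two vectors point in the same direction.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot   += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Top-k retrieval: score every stored chunk against the query, keep the best.
function topK(queryVector, chunks, k) {
  return chunks
    .map(c => ({ ...c, score: cosineSimilarity(queryVector, c.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const chunks = [
  { text: 'n8n installation steps', vector: [0.9, 0.1, 0.0] },
  { text: 'Cooking pasta',          vector: [0.0, 0.2, 0.9] },
  { text: 'Docker deployment',      vector: [0.8, 0.3, 0.1] },
];

topK([1, 0, 0], chunks, 2).map(c => c.text);
// → ["n8n installation steps", "Docker deployment"]
```

Only those top-k chunks are placed into the LLM's context, which is what lets a small context window sit in front of an arbitrarily large knowledge base.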

CHAPTER 8: YOUR FIRST AI AGENT: CONNECTING AN LLM TO N8N

Let us build our first AI agent: a conversational agent connected to OpenAI's GPT-4o, with memory and a basic tool.

Setting Up Your OpenAI Credential

Go to the Credentials section in the left sidebar and click Add Credential. Search for OpenAI and select it. Enter your OpenAI API key and give the credential a descriptive name like OpenAI Production Key.

Building the Basic Agent Workflow


Figure 7 — AI Agent Node Connection Ports

  ┌──────────────────────────────────────────────────────────────────┐
  │                      AI AGENT NODE                               │
  │                                                                  │
  │  INPUT ──▶   ┌─────────────────────────────────────────────┐     │
  │  (from       │                                             │     │
  │   trigger)   │          AI AGENT                           │     │
  │              │                                             │     │
  │              │  System Message: [your agent's persona]     │     │
  │              │  Agent Type:     [ReAct Agent]              │     │
  │              │  Max Iterations: [10]                       │     │
  │              │                                             │     │
  │              └─────────────────────────────────────────────┘     │
  │                    │              │              │               │
  │                    ▼              ▼              ▼               │
  │             ┌────────────┐ ┌───────────┐ ┌──────────────┐        │
  │             │ Chat Model │ │  Memory   │ │    Tool(s)   │        │
  │             │  (required)│ │(optional) │ │  (optional,  │        │
  │             │            │ │           │ │  add many)   │        │
  │             │ • OpenAI   │ │ • Simple  │ │              │        │
  │             │ • Ollama   │ │ • Redis   │ │ • HTTP Req.  │        │
  │             │ • Anthropic│ │ • Postgres│ │ • Code Node  │        │
  │             │ • Groq     │ │ • Vector  │ │ • Vector     │        │
  │             │ • Mistral  │ │   Store   │ │   Retriever  │        │
  │             │ • Azure OAI│ │           │ │ • Sub-wflow  │        │
  │             └────────────┘ └───────────┘ └──────────────┘        │
  │                                                                  │
  │  OUTPUT ──▶  Agent's final text response                         │
  └──────────────────────────────────────────────────────────────────┘

Create a new workflow. Add a Chat Trigger node as your trigger. Add an AI Agent node and connect the Chat Trigger to it. Configure the AI Agent:

  • Agent Type: Tools Agent (the recommended agent type for tool use as of 2025)
  • System Message:
You are a helpful, knowledgeable, and friendly AI assistant. You answer
questions clearly and concisely. When you are not sure about something,
you say so rather than making things up. You are honest about your
limitations and always try to be genuinely helpful.

Click the plus button next to Chat Model and add an OpenAI Chat Model node. Select your OpenAI credential, choose gpt-4o as the model, and set temperature to 0.7.

Adding Memory

Click the plus button next to Memory and add a Simple Memory node. Set the Context Window Length to 10 (the agent remembers the last 10 messages, i.e., 5 exchanges).

Adding Your First Tool: Current Date and Time

Click the plus button next to Tool and add a Code node as a tool. Rename it Get Current DateTime. Set its description to:

Use this tool to get the current date and time. Call this tool whenever
the user asks about the current time, today's date, what day it is,
or any question that requires knowing the current moment in time.
This tool takes no input parameters.

Write the tool code:

// ── Get Current DateTime Tool ─────────────────────────────────────────────
// Mode: Run Once for All Items
//
// This tool is called by the AI agent when the user asks about the
// current time or date. It takes no input parameters.
//
// When used as a Tool, n8n passes the agent's chosen parameters via
// $input.item.json — since this tool needs no parameters, we ignore input.

const now = new Date();

const options = {
  weekday:      'long',
  year:         'numeric',
  month:        'long',
  day:          'numeric',
  hour:         '2-digit',
  minute:       '2-digit',
  second:       '2-digit',
  timeZoneName: 'short',
};

const formattedDateTime = new Intl.DateTimeFormat('en-US', options).format(now);

return [{
  json: {
    currentDateTime: formattedDateTime,
    unixTimestamp:   Math.floor(now.getTime() / 1000),
    isoFormat:       now.toISOString(),
  }
}];

Run the workflow, open the chat interface, and ask "What time is it?" Watch the execution log to see the agent's full ReAct reasoning chain: its thought, the tool call, the observation, and the final response.


CHAPTER 9: LOCAL LLMS WITH OLLAMA: PRIVATE, FREE, AND POWERFUL

Ollama lets you run powerful open-source language models locally. Because Ollama exposes an OpenAI-compatible API, switching between local and cloud models in N8N requires minimal configuration changes.

Installing Ollama

Visit ollama.ai and download the installer for your operating system. After installation, Ollama runs as a background service on port 11434. Verify it is running:

http://localhost:11434

You should see: Ollama is running.

Downloading Models

# Good balance of capability and speed on consumer hardware
ollama pull llama3.2

# Smaller and faster — works well with limited RAM
ollama pull llama3.2:1b

# More capable — requires a GPU with 16 GB+ VRAM
ollama pull llama3.1:70b

# Embedding model for RAG (covered in Chapter 10)
ollama pull nomic-embed-text

# List your downloaded models
ollama list

# Test interactively in your terminal
ollama run llama3.2

Connecting Ollama to N8N

Create an Ollama credential in N8N. The only required field is the Base URL:

  N8N Installation       Ollama Base URL
  ────────────────────   ──────────────────────────────────────────
  npm (local)            http://localhost:11434
  Docker (Mac/Windows)   http://host.docker.internal:11434
  Docker (Linux)         http://172.17.0.1:11434 or use extra_hosts

In your workflow, replace the OpenAI Chat Model node with an Ollama Chat Model node. Select your Ollama credential and choose your downloaded model (e.g., llama3.2).

Linux Docker Networking Fix

Add this to your n8n service in docker-compose.yml:

extra_hosts:
  - "host.docker.internal:host-gateway"

Then use http://host.docker.internal:11434 as the Ollama URL.

Performance Guide

  Hardware              Model Size                Approximate Speed
  ───────────────────   ───────────────────────   ─────────────────
  CPU only, 16 GB RAM   7B params (llama3.2)      10–20 tokens/sec
  CPU only, 16 GB RAM   1B params (llama3.2:1b)   50–100 tokens/sec
  GPU, 8 GB VRAM        7B params                 50–100 tokens/sec
  GPU, 16 GB VRAM       13B params                40–80 tokens/sec
  GPU, 24 GB+ VRAM      70B params                20–40 tokens/sec

For agentic workflows requiring multiple reasoning steps, total latency multiplies. A 5-step ReAct loop with a slow model may take 30+ seconds. This is acceptable for background tasks but may feel slow for interactive chat. Choose your model accordingly.
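
The arithmetic behind that figure, with assumed (purely illustrative) numbers:

```javascript
// Back-of-the-envelope latency estimate for an agent loop.
// All three inputs are assumptions — substitute your own measurements.
const steps           = 5;    // ReAct iterations for the task
const tokensPerStep   = 100;  // tokens the model generates per THINK step
const tokensPerSecond = 15;   // e.g., a 7B model on CPU-only hardware

const seconds = (steps * tokensPerStep) / tokensPerSecond;
// seconds ≈ 33.3 — consistent with the "30+ seconds" figure above
```

Tool-call latency (API round trips) adds on top of generation time, so real runs are usually slower than this estimate.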


CHAPTER 10: GIVING YOUR AGENT A MEMORY: VECTOR STORES AND RAG

Setting Up the Embedding Model

Download the Ollama embedding model:

ollama pull nomic-embed-text

In N8N, when you add an Embeddings Ollama node, select your Ollama credential and choose nomic-embed-text as the model. If using OpenAI, the Embeddings OpenAI node with text-embedding-3-small is an excellent and cost-effective choice.

Building the Indexing Workflow

Create a new workflow named Knowledge Base Indexer. This workflow runs once (or periodically) to populate the vector store.


Figure 8 — Indexing Workflow Node Chain

  ┌──────────────────────────────────────────────────────────────────────┐
  │                    INDEXING WORKFLOW                                 │
  │                                                                      │
  │  [Manual     ──▶  [Source      ──▶  [Default    ──▶  [Recursive      │
  │   Trigger]         Data]             Data             Character      │
  │                    (Code node        Loader]          Text           │
  │                     or HTTP                           Splitter]      │
  │                     Request)                                         │
  │                                                            │         │
  │                                                            ▼         │
  │                                              ┌─────────────────────┐ │
  │                                              │  Vector Store Node  │ │
  │                                              │  Operation: Insert  │ │
  │                                              │                     │ │
  │                                              │  ◀── Embeddings     │ │
  │                                              │      Ollama/OpenAI  │ │
  │                                              └─────────────────────┘ │
  └──────────────────────────────────────────────────────────────────────┘

  Text Splitter settings:
    Chunk Size:    500 characters  (try 300–800 depending on content)
    Chunk Overlap: 50 characters   (10% of chunk size is a good rule)

Add a Manual Trigger node. Add a Code node to create sample documents:

// ── Knowledge Base Document Creator ──────────────────────────────────────
// Mode: Run Once for All Items
//
// Creates documents to be indexed into the vector store.
// In production, replace this with HTTP Request nodes, Google Drive nodes,
// Notion nodes, or database queries to fetch your actual content.
//
// Each document is returned as a separate item so downstream nodes
// (text splitter, embeddings, vector store) process them individually.

const documents = [
  {
    title:   'N8N Overview',
    content: `N8N is an open-source workflow automation platform that allows
              you to connect different services and build automated workflows
              using a visual, node-based interface. It supports over 500
              integrations and can be self-hosted for complete data privacy.
              N8N was created by Jan Oberhauser and first released in 2019.
              The name stands for "nodemation" with eight letters between
              the first and last letter.`,
    source:  'n8n-overview',
  },
  {
    title:   'N8N AI Capabilities',
    content: `N8N provides first-class support for building AI agents through
              its integration with LangChain. Key AI nodes include the AI Agent
              node (which implements the ReAct pattern), Chat Model nodes for
              connecting to LLMs like OpenAI and Ollama, Memory nodes for
              maintaining conversation context, Vector Store nodes for RAG
              implementations, and Tool nodes for giving agents capabilities.
              N8N supports over 70 AI nodes in 2025.`,
    source:  'n8n-ai-capabilities',
  },
  {
    title:   'N8N Installation',
    content: `N8N can be installed using Docker or npm. The Docker installation
              is recommended for production use because it provides isolation
              and reproducibility. The npm installation is simpler and suitable
              for local development. Both methods result in N8N running on
              port 5678 by default. Node.js 18 or higher is required for npm
              installation. Docker Compose with PostgreSQL is recommended for
              production deployments.`,
    source:  'n8n-installation',
  },
];

// Return each document as a separate n8n item.
// The 'text' field is what the text splitter will chunk.
// The 'metadata' field is stored alongside the chunks in the vector store
// and can be used to filter results during retrieval.
return documents.map(doc => ({
  json: {
    text:     doc.content,
    metadata: {
      title:  doc.title,
      source: doc.source,
    },
  }
}));

Add a Default Data Loader node connected to the Code node. This prepares documents for the vector store pipeline.

Add a Recursive Character Text Splitter node connected to the Default Data Loader. Configure:

  • Chunk Size: 500
  • Chunk Overlap: 50
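
To make these two settings concrete, here is a simplified character-based splitter. The real Recursive Character Text Splitter additionally prefers to break on paragraph and sentence boundaries, so treat this only as an illustration of what size and overlap mean:

```javascript
// Simplified fixed-size chunking with overlap. Each chunk starts
// (chunkSize - chunkOverlap) characters after the previous one, so
// consecutive chunks share chunkOverlap characters of context.
function splitText(text, chunkSize = 500, chunkOverlap = 50) {
  const chunks = [];
  const step = chunkSize - chunkOverlap;   // advance per chunk
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const chunks = splitText('a'.repeat(1200), 500, 50);
chunks.length;    // → 3
chunks[0].length; // → 500
```

The overlap is what prevents a sentence that straddles a chunk boundary from being lost to retrieval: it appears in full in at least one chunk.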

Add an Embeddings Ollama node (or Embeddings OpenAI). Select your credential and model.

Add an In-Memory Vector Store node. Connect the Text Splitter to the Vector Store's document input, and the Embeddings node to the Vector Store's embedding input. Set the operation to Insert.

Run this workflow to populate the vector store.

Building the RAG Agent Workflow

Create a second workflow: RAG Research Agent. Add a Chat Trigger and an AI Agent node.

Add a Vector Store Retriever node connected to the AI Agent's Tool input. Connect an In-Memory Vector Store (the same store) and an Embeddings Ollama node to the retriever. Set Top K to 3.

Give the Vector Store Retriever this description:

Use this tool to search the N8N knowledge base for information about
N8N features, installation, configuration, and capabilities. Call this
tool whenever the user asks a question about N8N. The tool accepts a
search query as input and returns relevant passages from the knowledge base.

Update the AI Agent's system message:

You are a knowledgeable N8N assistant with access to a knowledge base
about N8N. When users ask questions about N8N, use the knowledge base
search tool to find relevant information before answering. Always base
your answers on the information retrieved from the knowledge base.
If the knowledge base does not contain relevant information, say so
clearly rather than making things up.

Tip: Always use the same embedding model for both indexing and retrieval. Mixing models (e.g., indexing with nomic-embed-text but retrieving with text-embedding-3-small) will produce garbage results because the vector spaces are incompatible.


CHAPTER 11: GIVING YOUR AGENT TOOLS: HTTP REQUEST TOOLS AND CUSTOM ACTIONS

The HTTP Request Node as a Tool

Weather Tool — using the free wttr.in API (no API key required):

Method: GET
URL:    https://wttr.in/{{ $json.city }}?format=j1

Tool description:

Use this tool to get current weather information for any city in the world.
Provide the city name as the 'city' parameter (e.g., "London", "Tokyo",
"New York"). The tool returns current temperature, weather conditions,
humidity, and wind speed. Use this tool whenever the user asks about
weather, temperature, or climate conditions in a specific location.

Currency Tool — using the free exchangerate-api.com:

Method: GET
URL:    https://api.exchangerate-api.com/v4/latest/{{ $json.baseCurrency }}

Tool description:

Use this tool to get current currency exchange rates. Provide the base
currency code as the 'baseCurrency' parameter (e.g., 'USD', 'EUR', 'GBP',
'JPY'). The tool returns exchange rates from the base currency to all other
major currencies. Use this tool for any questions about currency conversion,
exchange rates, or international money values.

⚠️ DuckDuckGo API Note: The DuckDuckGo Instant Answer API (api.duckduckgo.com) is free and requires no API key, but it returns only instant answers (Wikipedia summaries, definitions, etc.) — not full web search results. It will return sparse or empty data for many queries. For real web search in production, consider SerpAPI, Brave Search API, or Tavily, all of which offer free tiers. We use DuckDuckGo in Chapter 13 for simplicity, but be aware of this limitation.

Building a Custom Calculation Tool

Add a Code node as a tool. Name it Financial Calculator. Give it this description — note that the description exactly matches the parameters the code checks:

Use this tool for financial calculations. Supported calculations:

1. Compound interest:
   - 'calculation': 'compound_interest'
   - 'principal':   initial amount (number)
   - 'rate':        annual interest rate as decimal (e.g., 0.05 for 5%)
   - 'periods':     number of years (number)
   - 'compoundsPerYear': times compounded per year (number, default 12)

2. Loan payment:
   - 'calculation': 'loan_payment'
   - 'principal':   loan amount (number)
   - 'rate':        annual interest rate as decimal (number)
   - 'periods':     loan term in months (number)

3. Investment return:
   - 'calculation': 'investment_return'
   - 'initial':     initial investment value (number)
   - 'final':       final investment value (number)

The tool code:

// ── Financial Calculator Tool ─────────────────────────────────────────────
// Mode: Run Once for All Items
//
// Called by the AI agent with parameters in $input.item.json.
// The agent populates these fields based on the tool description above.
//
// Returns calculation results or a descriptive error message.

const params          = $input.item.json;
const calculationType = params.calculation;

if (!calculationType) {
  return [{
    json: {
      error: 'No calculation type specified. Provide a "calculation" parameter: ' +
             'compound_interest, loan_payment, or investment_return.'
    }
  }];
}

let result;

// ── Compound Interest: A = P(1 + r/n)^(nt) ───────────────────────────────
if (calculationType === 'compound_interest') {
  const principal        = parseFloat(params.principal);
  const annualRate       = parseFloat(params.rate);
  const years            = parseFloat(params.periods);
  const compoundsPerYear = parseFloat(params.compoundsPerYear) || 12;

  if (isNaN(principal) || isNaN(annualRate) || isNaN(years)) {
    return [{ json: { error: 'compound_interest requires: principal, rate, periods' } }];
  }

  const finalAmount    = principal *
    Math.pow(1 + annualRate / compoundsPerYear, compoundsPerYear * years);
  const interestEarned = finalAmount - principal;

  result = {
    calculation:      'Compound Interest',
    principal:        principal.toFixed(2),
    annualRate:       (annualRate * 100).toFixed(2) + '%',
    years:            years,
    compoundsPerYear: compoundsPerYear,
    finalAmount:      finalAmount.toFixed(2),
    interestEarned:   interestEarned.toFixed(2),
  };

// ── Loan Payment: M = P[r(1+r)^n] / [(1+r)^n - 1] ───────────────────────
} else if (calculationType === 'loan_payment') {
  const principal   = parseFloat(params.principal);
  const annualRate  = parseFloat(params.rate);
  const months      = parseFloat(params.periods);
  const monthlyRate = annualRate / 12;

  if (isNaN(principal) || isNaN(annualRate) || isNaN(months)) {
    return [{ json: { error: 'loan_payment requires: principal, rate, periods (months)' } }];
  }

  // Guard against division by zero on a 0% loan.
  const monthlyPayment = monthlyRate === 0
    ? principal / months
    : principal *
      (monthlyRate * Math.pow(1 + monthlyRate, months)) /
      (Math.pow(1 + monthlyRate, months) - 1);
  const totalPayment   = monthlyPayment * months;
  const totalInterest  = totalPayment - principal;

  result = {
    calculation:    'Loan Payment',
    principal:      principal.toFixed(2),
    annualRate:     (annualRate * 100).toFixed(2) + '%',
    loanTermMonths: months,
    monthlyPayment: monthlyPayment.toFixed(2),
    totalPayment:   totalPayment.toFixed(2),
    totalInterest:  totalInterest.toFixed(2),
  };

// ── Investment Return: ROI = (Final - Initial) / Initial × 100 ───────────
} else if (calculationType === 'investment_return') {
  const initial = parseFloat(params.initial);
  const final   = parseFloat(params.final);

  if (isNaN(initial) || isNaN(final)) {
    return [{ json: { error: 'investment_return requires: initial, final' } }];
  }

  const roi            = ((final - initial) / initial) * 100;
  const absoluteReturn = final - initial;

  result = {
    calculation:    'Investment Return',
    initialValue:   initial.toFixed(2),
    finalValue:     final.toFixed(2),
    absoluteReturn: absoluteReturn.toFixed(2),
    roiPercentage:  roi.toFixed(2) + '%',
  };

} else {
  result = {
    error: `Unknown calculation type: "${calculationType}". ` +
           'Supported: compound_interest, loan_payment, investment_return'
  };
}

return [{ json: result }];
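
Before wiring the tool into an agent, it is worth sanity-checking the compound-interest branch outside n8n. This standalone sketch (the function name and sample figures are illustrative, not part of the workflow) applies the same formula:

```javascript
// Standalone check of the compound-interest formula used above:
// A = P(1 + r/n)^(nt)
function compoundInterest(principal, annualRate, years, compoundsPerYear = 12) {
  return principal *
    Math.pow(1 + annualRate / compoundsPerYear, compoundsPerYear * years);
}

// $1,000 at 5% APR, compounded monthly for 10 years ≈ $1,647.01
const finalAmount = compoundInterest(1000, 0.05, 10);
console.log(finalAmount.toFixed(2)); // 1647.01
```

If the Code node's output disagrees with a check like this, the bug is in parameter parsing rather than the math.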

Connecting Tools to External Services: GitHub Search

Create a GitHub credential in N8N (Settings → Credentials → Add → GitHub → enter your personal access token).

Add an HTTP Request node as a tool. Name it Search GitHub Repositories:

Method:          GET
URL:             https://api.github.com/search/repositories
Authentication:  GitHub (select your credential)

Query Parameters:
  q:        {{ $json.query }}
  sort:     {{ $json.sort || 'stars' }}
  order:    desc
  per_page: 5

Tool description:

Use this tool to search GitHub for repositories matching a query.
Provide 'query' as the search terms (e.g., 'machine learning python').
Optionally provide 'sort' as 'stars', 'forks', or 'updated' (default: 'stars').
Returns the top 5 matching repositories with names, descriptions, star counts,
and URLs.

Add a Code node after the HTTP Request node in the tool chain to clean up the response:

// ── GitHub Response Cleaner ───────────────────────────────────────────────
// Mode: Run Once for All Items
//
// Processes the raw GitHub search API response and extracts only the
// fields the AI agent needs. Reducing tool output size improves agent
// efficiency and reduces token consumption.

const response     = $input.first().json;
const repositories = response.items || [];

const simplifiedRepos = repositories.map(repo => ({
  name:        repo.full_name,
  description: repo.description || 'No description provided',
  stars:       repo.stargazers_count,
  forks:       repo.forks_count,
  language:    repo.language || 'Not specified',
  url:         repo.html_url,
  updatedAt:   repo.updated_at,
}));

return [{
  json: {
    totalResults: response.total_count,
    repositories: simplifiedRepos,
  }
}];
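
To see what the cleaner emits, you can run the same mapping logic against a minimal mock of the GitHub response shape (the repository values below are made up for illustration):

```javascript
// Mock of the GitHub search API response shape the cleaner consumes.
const response = {
  total_count: 1,
  items: [{
    full_name:        'example/awesome-ml',   // hypothetical repository
    description:      null,                    // exercises the fallback text
    stargazers_count: 1234,
    html_url:         'https://github.com/example/awesome-ml',
  }],
};

const simplified = (response.items || []).map(repo => ({
  name:        repo.full_name,
  description: repo.description || 'No description provided',
  stars:       repo.stargazers_count,
  url:         repo.html_url,
}));

console.log(simplified[0].description); // "No description provided"
```

The `|| 'No description provided'` fallback matters: a `null` description would otherwise confuse the agent when it reads the tool output.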

CHAPTER 12: MULTI-AGENT ORCHESTRATION: BUILDING AGENT TEAMS

Single agents are powerful, but some tasks are too complex for one agent to handle effectively. Multi-agent systems decompose complex tasks into specialized subtasks, each handled by an agent optimized for that type of work.


Figure 9 — Multi-Agent Orchestrator-Specialist Architecture

  ┌─────────────────────────────────────────────────────────────────────┐
  │                  MULTI-AGENT ARCHITECTURE                           │
  │                                                                     │
  │  USER REQUEST                                                       │
  │       │                                                             │
  │       ▼                                                             │
  │  ┌────────────────────────────────────────────────────────────┐     │
  │  │                   ORCHESTRATOR WORKFLOW                    │     │
  │  │                                                            │     │
  │  │  [Chat Trigger] ──▶ [AI Agent (Orchestrator)]              │     │
  │  │                           │                                │     │
  │  │              ┌────────────┼────────────┐                   │     │
  │  │              │            │            │                   │     │
  │  │              ▼            ▼            ▼                   │     │
  │  │    [Call n8n      [Call n8n      [Call n8n                 │     │
  │  │     Workflow       Workflow       Workflow                 │     │
  │  │     Tool]          Tool]          Tool]                    │     │
  │  │       │              │              │                      │     │
  │  └───────┼──────────────┼──────────────┼──────────────────────┘     │
  │          │              │              │                            │
  │          ▼              ▼              ▼                            │
  │  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐                 │
  │  │  SPECIALIST  │ │  SPECIALIST  │ │  SPECIALIST  │                 │
  │  │  WORKFLOW 1  │ │  WORKFLOW 2  │ │  WORKFLOW 3  │                 │
  │  │              │ │              │ │              │                 │
  │  │ [Execute     │ │ [Execute     │ │ [Execute     │                 │
  │  │  Workflow    │ │  Workflow    │ │  Workflow    │                 │
  │  │  Trigger]    │ │  Trigger]    │ │  Trigger]    │                 │
  │  │      │       │ │      │       │ │      │       │                 │
  │  │  [AI Agent]  │ │  [AI Agent]  │ │  [AI Agent]  │                 │
  │  │  (Analyst)   │ │  (Writer)    │ │  (Researcher)│                 │
  │  │      │       │ │      │       │ │      │       │                 │
  │  │ [Respond to  │ │ [Respond to  │ │ [Respond to  │                 │
  │  │  Workflow]   │ │  Workflow]   │ │  Workflow]   │                 │
  │  └──────────────┘ └──────────────┘ └──────────────┘                 │
  │          │              │              │                            │
  │          └──────────────┴──────────────┘                            │
  │                         │                                           │
  │                         ▼                                           │
  │              Results aggregated by Orchestrator                     │
  │                         │                                           │
  │                         ▼                                           │
  │                  RESPONSE TO USER                                   │
  └─────────────────────────────────────────────────────────────────────┘

  KEY NODES:
  • Orchestrator calls specialists using: "Call n8n Workflow Tool" node
  • Specialists start with:              "Execute Workflow Trigger" node
  • Specialists return data with:        "Respond to Workflow" node
  • Parent workflow calls sub-workflows: "Execute Sub-workflow" node
    (for non-agent, non-tool sub-workflow calls)

Building the Data Analyst Specialist Workflow

Create a new workflow named Specialist: Data Analyst.

Add an Execute Workflow Trigger node (also shown as "When Executed by Another Workflow" on the canvas). This is the correct trigger for sub-workflows called by other workflows.

Add an AI Agent node connected to the trigger with this system message:

You are an expert data analyst specializing in statistical analysis and
data interpretation. When given data, you calculate relevant statistics,
identify patterns and trends, note any anomalies or outliers, and provide
clear, actionable insights. You are precise, methodical, and always
explain your reasoning. You output your analysis in a structured format
with clear sections for statistics, patterns, and insights.

Connect an Ollama Chat Model (or OpenAI) and a Simple Memory node (context window: 5). Add the Financial Calculator tool from Chapter 11.

At the end, add a Respond to Workflow node. This sends the agent's output back to whatever workflow called this specialist.

Building the Writer Specialist Workflow

Create a new workflow named Specialist: Writer. Add an Execute Workflow Trigger and an AI Agent with this system message:

You are a skilled technical writer who specializes in making complex
information clear and engaging. When given structured information or
analysis results, you transform them into well-organized, readable prose.
You use clear headings, smooth transitions, and concrete examples.
Your writing is professional but accessible, avoiding unnecessary jargon.
You always maintain accuracy while improving readability.

Connect a language model and memory. Add a Respond to Workflow node at the end.

Building the Orchestrator

Create a new workflow named Orchestrator: Research Assistant. Add a Chat Trigger.

Add an AI Agent with this system message:

You are an intelligent orchestrator that coordinates a team of specialist
agents to complete complex tasks. You have access to two specialists:

1. The Data Analyst: Expert at analyzing numerical data, calculating
   statistics, identifying patterns, and providing data-driven insights.
   Use this specialist when the task involves numbers, statistics, or
   data interpretation.

2. The Writer: Expert at transforming structured information into clear,
   engaging prose. Use this specialist when you need to present information
   in a readable, well-organized format.

For complex tasks, you may use multiple specialists in sequence: first
the Data Analyst to analyze data, then the Writer to present the results.
Always synthesize the specialists' outputs into a coherent final response.

Add the specialist workflows as tools using the Call n8n Workflow Tool node (not the generic "Execute Sub-workflow" node — that is for non-agent contexts). Click the plus button next to Tool on the AI Agent node and search for Call n8n Workflow Tool.

Configure the first Call n8n Workflow Tool to call the Specialist: Data Analyst workflow. Description:

Use this tool to delegate data analysis tasks to the Data Analyst specialist.
Provide the data to analyze and the specific analysis question as the input.
The specialist will return statistical analysis, pattern identification,
and data-driven insights. Use this tool for any task involving numerical
data, statistics, or quantitative analysis.

Configure the second Call n8n Workflow Tool to call the Specialist: Writer workflow. Description:

Use this tool to delegate writing tasks to the Writer specialist.
Provide the information to be written up and any specific formatting
or style requirements. The specialist will return well-organized,
clear prose. Use this tool when you need to present complex information
in a readable format, or after getting analysis from the Data Analyst.

Add a language model and memory to the orchestrator. Test with: "I have sales data: January $45,000, February $52,000, March $48,000, April $61,000, May $58,000, June $67,000. Can you analyze the trend and write me a professional summary for my board presentation?"
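
It helps to know what a correct analysis looks like before you test. This quick standalone script (plain Node.js, not an n8n node) reproduces the headline statistics the Data Analyst should report for the sample figures in that prompt:

```javascript
// The sample sales figures from the test prompt, January through June.
const sales = [45000, 52000, 48000, 61000, 58000, 67000];

const total   = sales.reduce((sum, v) => sum + v, 0);
const average = total / sales.length;
// Overall growth from the first to the last month.
const growthPct = ((sales[sales.length - 1] - sales[0]) / sales[0]) * 100;

console.log(`Total: $${total}`);                      // Total: $331000
console.log(`Average: $${average.toFixed(0)}`);       // Average: $55167
console.log(`Jan-Jun growth: ${growthPct.toFixed(1)}%`); // Jan-Jun growth: 48.9%
```

If the orchestrator's final summary quotes numbers far from these, the Data Analyst specialist was either not called or was given the wrong data.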

The Planner-Executor-Critic Loop


Figure 10 — Planner-Executor-Critic Pattern

  ┌─────────────────────────────────────────────────────────────────────┐
  │               PLANNER-EXECUTOR-CRITIC LOOP                          │
  │                                                                     │
  │  USER GOAL                                                          │
  │      │                                                              │
  │      ▼                                                              │
  │  [PLANNER Agent]  ──▶  Detailed task plan                           │
  │      │                                                              │
  │      ▼                                                              │
  │  [EXECUTOR Agent] ──▶  Executes plan using tools  ──▶  Draft output │
  │      │                                                              │
  │      ▼                                                              │
  │  [CRITIC Agent]   ──▶  Reviews output against goal                  │
  │      │                                                              │
  │      ├── APPROVED ──▶  Return final output to user                  │
  │      │                                                              │
  │      └── REJECTED ──▶  Feedback to Executor ──▶  [EXECUTOR again]   │
  │                              │                                      │
  │                         (loop repeats up to MAX_ITERATIONS)         │
  │                                                                     │
  │  ⚠️  Always set a maximum iteration limit (3-5 is usually enough)   │
  │     to prevent infinite loops and runaway API costs.                │
  └─────────────────────────────────────────────────────────────────────┘

This pattern adds a quality control layer. The Planner creates a task plan, the Executor carries it out using tools, and the Critic reviews the result. If the Critic is not satisfied, the Executor tries again with the Critic's feedback. The maximum iteration limit is critical — without it, a misconfigured system could loop indefinitely consuming API tokens.
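
Stripped of the LLM calls, the control flow is just a bounded retry loop. This sketch uses stub functions in place of the three agents (all names and strings here are illustrative; in a real build each stub would be an AI Agent node):

```javascript
// Bounded planner-executor-critic loop with stubs standing in for LLM calls.
const MAX_ITERATIONS = 3;

const plan     = goal => `1. research ${goal}  2. draft an answer`;
const execute  = (taskPlan, feedback) =>
  feedback ? `revised draft (${feedback})` : 'first draft';
const critique = draft =>
  draft.startsWith('revised')
    ? { approved: true }
    : { approved: false, feedback: 'add sources' };

function runLoop(goal) {
  const taskPlan = plan(goal);
  let feedback = null;
  for (let i = 1; i <= MAX_ITERATIONS; i++) {
    const draft  = execute(taskPlan, feedback);
    const review = critique(draft);
    if (review.approved) return { draft, iterations: i };
    feedback = review.feedback;   // Critic's feedback flows back to the Executor
  }
  // Bail out gracefully rather than looping forever.
  return { draft: null, iterations: MAX_ITERATIONS, error: 'max iterations reached' };
}

console.log(runLoop('market trends'));
```

The key line is the `for` bound: the loop terminates even if the Critic never approves, which is exactly the guarantee the iteration limit in the diagram provides.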


CHAPTER 13: A COMPLETE AGENTIC AI PROJECT: THE RESEARCH ASSISTANT

Let us bring everything together in a complete, production-quality agentic AI project: a Research Assistant that can search the web, query a knowledge base, perform calculations, and present findings in structured reports.

Project Architecture


Figure 11 — Complete Research Assistant Architecture

  ┌─────────────────────────────────────────────────────────────────────┐
  │                RESEARCH ASSISTANT — FULL ARCHITECTURE               │
  │                                                                     │
  │  WORKFLOW 1: Knowledge Base Indexer (runs periodically)             │
  │  ─────────────────────────────────────────────────────              │
  │  [Schedule] ──▶ [Fetch Docs] ──▶ [Data Loader] ──▶ [Splitter]       │
  │       ┌─────────────────────────────────────────────────┘           │
  │       ▼                                                             │
  │  [Embeddings] ──▶ [Vector DB]                                       │
  │                                                                     │
  │  WORKFLOW 2: Research Agent (main workflow)                         │
  │  ─────────────────────────────────────────                          │
  │                                                                     │
  │  [Chat Trigger]                                                     │
  │       │                                                             │
  │       ▼                                                             │
  │  [AI Agent (Research Assistant)]                                    │
  │       │                                                             │
  │       ├──[Tool]──▶ [Web Search (DuckDuckGo HTTP)]                   │
  │       │                 ──▶ [Response Cleaner Code]                 │
  │       │                                                             │
  │       ├──[Tool]──▶ [Vector Store Retriever]                         │
  │       │                 ──▶ [Vector DB] + [Embeddings]              │
  │       │                                                             │
  │       ├──[Tool]──▶ [Financial Calculator Code]                      │
  │       │                                                             │
  │       └──[Tool]──▶ [Call n8n Workflow Tool]                         │
  │                         ──▶ WORKFLOW 3 (Report Formatter)           │
  │                                                                     │
  │  WORKFLOW 3: Report Formatter Specialist                            │
  │  ────────────────────────────────────────                           │
  │  [Execute Workflow Trigger] ──▶ [Format Code] ──▶ [Respond to Wf]   │
  └─────────────────────────────────────────────────────────────────────┘

Building the Web Search Tool

⚠️ Reminder: The DuckDuckGo Instant Answer API returns limited results — instant answers and related topics only, not full web search results. It will return empty data for many specific queries. This is acceptable for a tutorial example. For production, use SerpAPI, Brave Search API, or Tavily.

Add an HTTP Request node as a tool. Name it Web Search:

Method: GET
URL:    https://api.duckduckgo.com/

Query Parameters:
  q:              {{ $json.query }}
  format:         json
  no_html:        1
  skip_disambig:  1

Tool description:

Use this tool to search the web for information on any topic.
Provide a clear, specific search query as the 'query' parameter.
Returns instant answers and related topics. Use this tool when the user
asks about facts or information that may not be in the knowledge base.
Note: this tool returns summary information, not full web pages.
Prefer specific, well-formed queries for best results.

Add a Code node after the HTTP Request to clean the response:

// ── DuckDuckGo Response Processor ────────────────────────────────────────
// Mode: Run Once for All Items
//
// Extracts the most useful fields from the DuckDuckGo Instant Answer API
// response. Note: this API returns instant answers (Wikipedia summaries,
// definitions) rather than full web search results.

const response = $input.first().json;

const searchResult = {
  query:         response.QueryVerbatim || '',
  instantAnswer: response.AbstractText  || '',
  answerType:    response.Type          || '',
  source:        response.AbstractSource || '',
  sourceUrl:     response.AbstractURL   || '',
};

// Include up to 5 related topics if available.
if (response.RelatedTopics && response.RelatedTopics.length > 0) {
  searchResult.relatedTopics = response.RelatedTopics
    .filter(topic => topic.Text)
    .slice(0, 5)
    .map(topic => ({
      text: topic.Text,
      url:  topic.FirstURL || '',
    }));
}

if (!searchResult.instantAnswer) {
  searchResult.note =
    'No instant answer available for this query. ' +
    'Try a more specific search term, or check the knowledge base.';
}

return [{ json: searchResult }];
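
Because the Instant Answer API often returns empty fields, it is worth verifying the fallback path in isolation. This mock (the response values are fabricated for the test) exercises the "no answer" branch:

```javascript
// Mock DuckDuckGo response with no abstract, exercising the fallback note.
const response = { AbstractText: '', Type: '', RelatedTopics: [] };

const searchResult = {
  instantAnswer: response.AbstractText || '',
  answerType:    response.Type         || '',
};

if (!searchResult.instantAnswer) {
  searchResult.note =
    'No instant answer available for this query. ' +
    'Try a more specific search term, or check the knowledge base.';
}

console.log(searchResult.note);
```

That note is what the agent sees on an empty result, so it can recover by rephrasing the query or falling back to the knowledge base instead of hallucinating an answer.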

Building the Report Formatter Specialist

Create a new workflow named Specialist: Report Formatter. Add an Execute Workflow Trigger.

Add a Code node for formatting:

// ── Report Formatter Specialist ───────────────────────────────────────────
// Mode: Run Once for All Items
//
// Receives research findings from the orchestrator and formats them into
// a structured, consistently styled report.
//
// Expected input fields (from the orchestrator agent):
//   topic:    (string) the research question or topic title
//   findings: (string) the main research content
//   sources:  (array, optional) list of source names or URLs

const input    = $input.first().json;
const topic    = input.topic    || 'Research Report';
const findings = input.findings || '(No findings provided)';
const sources  = Array.isArray(input.sources) ? input.sources : [];

const timestamp = new Date().toLocaleDateString('en-US', {
  year:  'numeric',
  month: 'long',
  day:   'numeric',
});

// Build the report as a formatted string.
// Consistent structure makes reports easy to scan and professional.
let report = '';
report += `RESEARCH REPORT\n`;
report += `${'='.repeat(60)}\n\n`;
report += `Topic:     ${topic}\n`;
report += `Generated: ${timestamp}\n\n`;
report += `FINDINGS\n`;
report += `${'─'.repeat(40)}\n\n`;
report += `${findings}\n\n`;

if (sources.length > 0) {
  report += `SOURCES\n`;
  report += `${'─'.repeat(40)}\n\n`;
  sources.forEach((source, index) => {
    report += `${index + 1}. ${source}\n`;
  });
  report += '\n';
}

report += `${'='.repeat(60)}\n`;
report += `[End of Report]\n`;

return [{
  json: {
    formattedReport: report,
    topic:           topic,
    generatedAt:     new Date().toISOString(),
  }
}];

Add a Respond to Workflow node at the end.

The Complete Research Agent Workflow

Create a new workflow named Research Assistant. Add a Chat Trigger.

Add an AI Agent with this system message:

You are an advanced research assistant with access to a knowledge base,
web search, and analytical tools. Your goal is to provide thorough,
accurate, and well-organized answers to research questions.

When answering questions, follow this process:
1. First, search your knowledge base for relevant information.
2. If the knowledge base does not have sufficient information,
   use web search to find current, accurate data.
3. For questions involving numbers or financial data, use the
   financial calculator tool for precise calculations.
4. When you have gathered sufficient information, use the report
   formatter to present your findings clearly.

Always cite your sources. If you are uncertain about something,
say so. Prefer accuracy over comprehensiveness: a shorter, accurate
answer is better than a longer, uncertain one.

Never make up facts. If you cannot find reliable information,
tell the user what you could and could not find, and suggest
how they might find the missing information themselves.

Connect all tools to the AI Agent's Tool inputs:

  • Web Search HTTP Request + Cleaner Code (chained)
  • Vector Store Retriever (from Chapter 10)
  • Financial Calculator Code (from Chapter 11)
  • Call n8n Workflow Tool → Report Formatter workflow

Connect an Ollama Chat Model and a Simple Memory node (context window: 10).

Testing the Complete System

Activate all workflows. Run the Knowledge Base Indexer first. Then test the Research Assistant with progressively complex queries:

Simple (knowledge base):

"How do I install N8N using Docker?"

Requires web search:

"What is the current version of Llama 3 and what are its capabilities?"

Multi-tool (calculator + formatter):

"I want to invest $10,000 for 5 years at 6% annual interest compounded monthly. Calculate my returns and write me a brief investment summary."

Multi-specialist (analysis + writing):

"Here are my Q1 sales: January $45,000, February $52,000, March $48,000. Analyze the trend and write a board-ready summary."

Monitoring and Debugging

The execution log is your primary debugging tool. For each execution, click the AI Agent node to see the full ReAct reasoning chain. Common issues and their causes:

  Symptom: Agent ignores a tool
    Likely cause: Tool description too vague
    Fix:          Make the description more explicit about when to use it

  Symptom: Agent uses wrong tool
    Likely cause: Overlapping tool descriptions
    Fix:          Differentiate the descriptions more clearly

  Symptom: RAG returns irrelevant chunks
    Likely cause: Wrong embedding model, or chunk size mismatch
    Fix:          Use the same model for indexing and retrieval; adjust
                  chunk size

  Symptom: Sub-workflow not called
    Likely cause: Wrong node type used
    Fix:          Ensure you used "Call n8n Workflow Tool" (not
                  "Execute Sub-workflow") for agent tool use

  Symptom: Python code fails
    Likely cause: Using items instead of _input.all()
    Fix:          Update to n8n 2.0+ variable names

  Symptom: Infinite agent loop
    Likely cause: No max iteration limit
    Fix:          Set max iterations in the AI Agent node settings

APPENDIX A: ENVIRONMENT VARIABLES REFERENCE

  N8N_ENCRYPTION_KEY
    Purpose: Encrypts stored credentials. Generate with openssl rand -hex 32.
             Keep it secret: losing it means losing access to all stored
             credentials.
    Example: a1b2c3...

  WEBHOOK_URL
    Purpose: Base URL for generated webhook URLs. Set to your public URL
             when behind a reverse proxy.
    Example: https://n8n.mycompany.com/

  DB_TYPE
    Purpose: Database backend. Must be exactly postgresdb for PostgreSQL
             (not postgres).
    Example: postgresdb

  DB_POSTGRESDB_HOST
    Purpose: PostgreSQL hostname. Use the Docker service name when running
             in Compose.
    Example: postgres

  DB_POSTGRESDB_DATABASE
    Purpose: PostgreSQL database name.
    Example: n8n

  DB_POSTGRESDB_USER
    Purpose: PostgreSQL username.
    Example: n8n

  DB_POSTGRESDB_PASSWORD
    Purpose: PostgreSQL password.
    Example: your_secure_password

  GENERIC_TIMEZONE
    Purpose: Timezone for scheduled workflows.
    Example: America/New_York

  EXECUTIONS_DATA_PRUNE
    Purpose: Auto-delete old execution logs to prevent database bloat.
    Example: true

  EXECUTIONS_DATA_MAX_AGE
    Purpose: Maximum age in hours of execution data to keep when pruning.
    Example: 720 (30 days)

  NODE_FUNCTION_ALLOW_EXTERNAL
    Purpose: npm packages allowed in Code nodes. Use * for all, or a
             comma-separated list.
    Example: axios,lodash

  N8N_PAYLOAD_SIZE_MAX
    Purpose: Maximum webhook payload size in MB.
    Example: 16

  QUEUE_BULL_REDIS_HOST
    Purpose: Redis hostname for queue mode (horizontal scaling).
    Example: redis
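
These variables typically live together in a single .env file next to docker-compose.yml. A minimal sketch of such a file, using only variables from the table above (the key and password are placeholders you must replace):

```shell
# Sketch of a .env file for an n8n + PostgreSQL Compose deployment.
# Placeholder values: generate a real key and choose a real password.
N8N_ENCRYPTION_KEY=replace_with_output_of_openssl_rand_hex_32
WEBHOOK_URL=https://n8n.mycompany.com/

# Must be exactly "postgresdb", not "postgres".
DB_TYPE=postgresdb
# The Compose service name, not localhost.
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your_secure_password

GENERIC_TIMEZONE=America/New_York
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=720
```

Keeping them in one file makes it easy to see, in one place, whether DB_TYPE and the host name match what Compose expects.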

APPENDIX B: TROUBLESHOOTING COMMON ISSUES

Agent not using tools
The most common cause is an unclear tool description. Make the description more explicit about when the tool should be used and include examples of queries that should trigger it. Also verify the tool node is connected to the AI Agent's Tool input port, not another port.

Ollama connection errors in Docker
Verify you are using host.docker.internal instead of localhost in the Ollama credential's Base URL. On Linux, add extra_hosts: ["host.docker.internal:host-gateway"] to your n8n service in docker-compose.yml. Confirm Ollama is running on the host with curl http://localhost:11434.

Vector store returning irrelevant results
Check that you are using the same embedding model for both indexing and retrieval — mixing models produces incompatible vectors. Try smaller chunk sizes (200–400 characters). Increase the Top K value to retrieve more candidates. Review document quality: poorly written source material produces poor embeddings.

Python Code node errors
The most common error is using the old n8n 1.x variable items instead of _input.all(). Update all Python Code nodes to use _input.all() for "Run Once for All Items" mode and _input.item for "Run Once for Each Item" mode. Ensure your return format is [{"json": {...}}] — a list of dicts with a "json" key.

JavaScript Code node errors
Ensure you return [{ json: {...} }] — an array of objects with json keys — in "Run Once for All Items" mode. In "Run Once for Each Item" mode, return a single object { json: {...} } without the array wrapper.
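
The two shapes are easy to confuse. A side-by-side sketch (plain JavaScript, outside n8n; the function names are illustrative) of what each mode must return:

```javascript
// "Run Once for All Items": return an ARRAY of { json: ... } objects.
function runOnceForAllItems(values) {
  return values.map(value => ({ json: { value } }));
}

// "Run Once for Each Item": return a SINGLE { json: ... } object.
function runOnceForEachItem(value) {
  return { json: { value } };
}

console.log(Array.isArray(runOnceForAllItems([1, 2])));  // true
console.log(Array.isArray(runOnceForEachItem(1)));       // false
```

If n8n reports an error about the returned data shape, check which mode the node is in before touching any other code.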

Sub-workflow not receiving calls
Verify the specialist workflow starts with an Execute Workflow Trigger node (not a Webhook or Chat Trigger). Verify the orchestrator uses the Call n8n Workflow Tool node (for agent tool use) or the Execute Sub-workflow node (for non-agent sub-workflow calls). These are different nodes with different purposes.

Memory not persisting across sessions
Simple Memory only persists for the duration of one N8N process session. If you restart N8N, Simple Memory is cleared. Switch to Postgres Chat Memory or Redis Memory for persistence across restarts.

Workflow executions timing out
The most likely cause is a slow LLM response, especially with large local models. Increase the execution timeout in N8N's settings. For production systems, consider using N8N's queue mode and configuring appropriate timeout values. Switching to a smaller, faster model for development can significantly speed up iteration.

n8n fails to start with PostgreSQL in Docker Compose
Ensure depends_on uses condition: service_healthy (not just depends_on: postgres). Without the health check condition, n8n may start before PostgreSQL is ready to accept connections. Verify your DB_TYPE is set to postgresdb (not postgres).


╔══════════════════════════════════════════════════════════════════════════════╗
║                              CONCLUSION                                      ║
╚══════════════════════════════════════════════════════════════════════════════╝

You have traveled a long way in this tutorial. You started with the concept of a workflow engine and ended with a complete, multi-agent AI research assistant that can search the web, query a knowledge base, perform calculations, and present findings in structured reports. Along the way, you learned how N8N works at a fundamental level, how to connect it to both cloud and local LLMs, how to build and configure AI agents with memory and tools, and how to orchestrate multiple agents into collaborative systems.

The corrections made in this edition matter in practice. Using _input.all() instead of items in Python Code nodes is the difference between code that runs and code that silently fails in n8n 2.0+. Using the Call n8n Workflow Tool node (not the generic Execute Sub-workflow node) for agent tool use is the difference between a working multi-agent system and one that never delegates. Using depends_on with condition: service_healthy in Docker Compose is the difference between a reliable production deployment and one that crashes on startup half the time.

The field of agentic AI is moving extraordinarily fast. New models, new techniques, and new tools are emerging constantly. But the fundamental concepts you have learned here — the ReAct loop, tool use, memory management, RAG, and multi-agent orchestration — are stable foundations that will remain relevant regardless of which specific models or APIs are popular at any given moment.

The best way to continue learning is to build things. Take the Research Assistant from Chapter 13 and extend it: add a new tool, connect it to a real database, give it a new specialist agent. Break things, debug them, and learn from what you discover. The N8N community forum and the official documentation are excellent resources when you get stuck.

The era of agentic AI is just beginning, and you now have the knowledge and tools to be part of building it. Go make something remarkable.
