We run n8n on a $12/month VPS handling 50,000+ executions per month for four clients. Total monthly cost including all API calls: about $208. The equivalent Zapier bill for those workflows would exceed $2,400/month. That is the economic case for n8n in one paragraph.
But cost is not the only reason we chose n8n over Zapier or Make. n8n has first-class AI nodes, native LangChain integration, a Code node that runs real JavaScript, and no per-execution pricing when self-hosted. For AI-heavy automation workflows that call LLMs multiple times per execution, the per-execution pricing model of cloud platforms becomes prohibitive very quickly.
This tutorial starts from zero and ends with five production-ready AI automation workflows running on infrastructure you control.
Why n8n Over Zapier and Make for AI Workflows
The short version: Zapier and Make were designed for simple linear automations in the API-integration era. n8n was designed for complex, branching, code-enabled workflows — and its AI nodes were added by engineers who actually understand how LLM pipelines work.
- No per-execution cost when self-hosted. Zapier charges $0.006–$0.05 per task depending on plan. A single AI workflow that makes 5 API calls counts as 5 tasks on Zapier. An n8n workflow that makes 50 API calls per execution costs the same as one that makes 1.
- Native AI Agent node. n8n's AI Agent node wraps LangChain's agent executor. You configure a system prompt, attach tools (web search, calculator, HTTP request, custom functions), and the agent reasons over multiple steps automatically. This is not available in Zapier at all and only partially in Make.
- Code node with real JavaScript. When the visual nodes cannot do what you need, drop into a Code node with full Node.js. Array manipulation, complex string parsing, cryptographic operations, custom data transformations — all available without leaving n8n.
- LangChain integration. Vector store retrieval, memory buffers, document loaders, and output parsers are all available as native n8n nodes. You can build a complete RAG pipeline in n8n without writing a single line of code.
Installation Options
Option 1: n8n Cloud ($20/month starter)
Sign up at n8n.io, create an account, and you have a working n8n instance in 2 minutes. The $20/month starter plan includes 2,500 workflow executions. Each execution can run unlimited operations. For low-volume use cases or teams not comfortable with server management, this is the right starting point.
The $20 plan stops making sense once you regularly exceed 2,500 executions/month. At that point, self-hosting on a $6–$12/month VPS is significantly more economical.
Option 2: Docker Self-Hosted (Our Recommended Approach)
Self-hosting n8n with Docker gives you unlimited executions, full data control, and the ability to install custom nodes. We run this on Hetzner Cloud (European provider, best price-to-performance ratio) at $4.51/month for a CX22 instance (2 vCPU, 4GB RAM, 40GB SSD) or $6/month on DigitalOcean for a comparable Droplet.
Option 3: Railway or Render One-Click Deploy
Railway and Render both offer n8n deployment templates. Click deploy, add environment variables, and you have a running instance in 5 minutes. Railway starts at $5/month, Render at $7/month for the instance size needed. These are good middle grounds between managed (n8n Cloud) and fully self-managed (VPS Docker).
Step-by-Step Docker Setup on a Hetzner VPS
This is the setup we use in production. Follow these steps exactly and you will have a hardened n8n instance running with PostgreSQL (not SQLite) and HTTPS.
Step 1: Provision the Server
Create a Hetzner CX22 instance running Ubuntu 24.04. During setup, add your SSH public key. Point a subdomain (e.g., n8n.yourdomain.com) to the server's IP address. Allow ports 22, 80, and 443 in the Hetzner firewall.
Step 2: Install Docker
- SSH into the server: ssh root@your-server-ip
- Run: curl -fsSL https://get.docker.com | sh
- Verify: docker --version (should show 26.x or later)
- Install Compose plugin: apt install docker-compose-plugin
Step 3: Create docker-compose.yml
Create a directory: mkdir /opt/n8n && cd /opt/n8n. Create docker-compose.yml with the following services:
- postgres: postgres:16-alpine image, database n8n, credentials from environment variables, volume mounted to /var/lib/postgresql/data
- n8n: n8nio/n8n:latest image, depends_on postgres, environment variables for DB connection, WEBHOOK_URL set to your subdomain with HTTPS, volume mounted to /home/node/.n8n, ports 5678:5678
- caddy: caddy:alpine image for automatic HTTPS, Caddyfile that reverse-proxies n8n.yourdomain.com to n8n:5678
The key environment variables for n8n: N8N_HOST, N8N_PORT=5678, N8N_PROTOCOL=https, NODE_ENV=production, WEBHOOK_URL=https://n8n.yourdomain.com, DB_TYPE=postgresdb, DB_POSTGRESDB_HOST=postgres, DB_POSTGRESDB_DATABASE=n8n, and credentials.
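Putting the three services together, a minimal docker-compose.yml might look like the sketch below. The volume names, the .env-based password, and the decision not to publish port 5678 (Caddy reaches n8n over the internal compose network) are our assumptions — adjust them to your setup:

```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}   # set in an .env file, never hardcoded
    volumes:
      - postgres_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n:latest
    depends_on:
      - postgres
    environment:
      N8N_HOST: n8n.yourdomain.com
      N8N_PORT: "5678"
      N8N_PROTOCOL: https
      NODE_ENV: production
      WEBHOOK_URL: https://n8n.yourdomain.com
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - n8n_data:/home/node/.n8n

  caddy:
    image: caddy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  postgres_data:
  n8n_data:
  caddy_data:
```

The matching Caddyfile is two lines: n8n.yourdomain.com { reverse_proxy n8n:5678 }. Caddy provisions and renews the TLS certificate automatically.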
Step 4: Start and Verify
- Run: docker compose up -d
- Check logs: docker compose logs -f n8n
- Wait for "Editor is now accessible via" message
- Visit https://n8n.yourdomain.com and complete the setup wizard
Workflow 1: Email Classifier and Auto-Responder
This workflow monitors a Gmail inbox, classifies incoming emails by topic and urgency, and drafts responses for Tier 1 queries automatically.
Nodes in Order
- Gmail Trigger: Polls every 5 minutes for new emails in a specific label (e.g., "Inbox/Unprocessed"). Returns email subject, body, sender, and thread ID.
- OpenAI node (classify): System prompt: "Classify this email as one of: [support-question, billing-inquiry, spam, partnership, complaint, other]. Return JSON with fields: category, urgency (1-3), can_auto_respond (boolean)."
- IF node: Branches on can_auto_respond = true.
- OpenAI node (draft): Only reached if can_auto_respond = true. System prompt references your knowledge base (injected as context), drafts a response.
- Gmail node (create draft): Creates a draft reply (does NOT send automatically). Human reviews and sends.
- Slack node: Sends a notification to your team Slack channel with the email summary and classification. If urgency = 3, pings the on-call channel.
Important design decision: the workflow creates a draft, not an auto-send. We recommend this for all email workflows until you have run 2–4 weeks of quality review. The cost to draft but not send is essentially zero. The cost of sending a bad AI response to a customer is significant.
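One practical detail: the classify node returns JSON, but models occasionally wrap it in markdown fences or return malformed output. A defensive parse in a Code node between the OpenAI node and the IF node keeps a bad response from slipping through. This is a sketch — the field names match the prompt above, and the fail-safe default routes anything unparseable to a human:

```javascript
// Defensively parse the classifier's JSON output in an n8n Code node.
// On any parse failure, fall back to human review (can_auto_respond = false).
function parseClassification(raw) {
  // Strip markdown code fences the model may wrap around the JSON
  const cleaned = raw.replace(/\u0060{3}(?:json)?/g, '').trim();
  try {
    const parsed = JSON.parse(cleaned);
    const categories = ['support-question', 'billing-inquiry', 'spam',
                        'partnership', 'complaint', 'other'];
    return {
      category: categories.includes(parsed.category) ? parsed.category : 'other',
      urgency: Math.min(3, Math.max(1, Number(parsed.urgency) || 1)),  // clamp to 1-3
      can_auto_respond: parsed.can_auto_respond === true,
    };
  } catch (e) {
    // Unparseable output: route to a human, never auto-respond
    return { category: 'other', urgency: 2, can_auto_respond: false };
  }
}

// In the Code node itself, you would map over the incoming items, e.g.:
// return items.map(i => ({ json: parseClassification(i.json.message.content) }));
```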
Workflow 2: Lead Enrichment Pipeline
Triggers when a new lead arrives in your CRM, enriches the lead with company data, and generates an AI-written personalization brief for your sales team.
Nodes in Order
- Webhook node: Receives POST from your CRM (HubSpot, Salesforce, or custom) when a new lead is created. Payload includes name, email, company, source.
- HTTP Request node (Clearbit): Calls Clearbit Enrichment API ($99/month for 250 lookups, or free tier at 25/month) with the email address. Returns company size, industry, technologies used, funding stage.
- OpenAI node (summarize): Takes the raw Clearbit data and generates a 3-sentence personalization brief: company context, likely pain points based on industry/size, suggested talking points.
- HTTP Request node (CRM update): Updates the lead record in your CRM with the enrichment data and AI summary via API.
- Slack node: Posts the lead summary to your sales Slack channel with a direct link to the CRM record.
We run this for a B2B SaaS client. Response time from lead creation to enriched CRM record: 8 seconds. Previously, a sales coordinator did this manually (3–5 minutes per lead) before handing off to the rep. The workflow saves 2 hours/day for that coordinator.
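The glue between the Clearbit response and the summarize node is a small Code node that assembles the prompt. A sketch, assuming illustrative field names (employees, industry, funding) — verify them against the actual enrichment payload you receive:

```javascript
// Merge the webhook lead payload with the enrichment response into the
// prompt for the personalization-brief OpenAI call. Field names on the
// `company` object are illustrative, not guaranteed Clearbit keys.
function buildBriefPrompt(lead, company) {
  const facts = [
    company.name && `Company: ${company.name}`,
    company.employees && `Employees: ${company.employees}`,
    company.industry && `Industry: ${company.industry}`,
    company.funding && `Funding stage: ${company.funding}`,
  ].filter(Boolean).join('\n');  // skip fields the enrichment API did not return

  return [
    `Lead: ${lead.name} <${lead.email}> from ${lead.company} (source: ${lead.source})`,
    facts,
    'Write a 3-sentence personalization brief: company context, likely pain points for this industry and size, and suggested talking points.',
  ].join('\n\n');
}
```

Dropping empty fields instead of sending "undefined" keeps the model from inventing details to fill the gaps.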
Workflow 3: Content Repurposing Pipeline
Monitors an RSS feed (your blog, industry publications, or competitor blogs), rewrites content for LinkedIn, and queues posts via Buffer.
Nodes in Order
- RSS Feed Trigger: Polls your blog RSS every hour. Triggers on new items only.
- HTTP Request node: Fetches the full article content from the URL (the RSS often only contains a summary). Use the URL from the RSS item.
- Code node: Strips HTML tags from the fetched content and extracts the main body text, using JavaScript regex replacements.
- OpenAI node: System prompt: "Rewrite this blog post as a LinkedIn post. Maximum 1,300 characters. Include one key insight, one data point or specific example, and a question to drive engagement. Do not use hashtags excessively — maximum 3."
- Buffer node (or HTTP Request to Buffer API): Creates a scheduled post for LinkedIn. Schedules 24 hours from trigger time.
- Slack node: Notifies the content team with the drafted post for optional review.
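The HTML-stripping Code node can be as simple as the sketch below. Regex stripping is good enough for well-formed blog HTML; if your sources are messy, a real parser (for example a community cheerio node) is the safer route:

```javascript
// Strip HTML and extract readable body text from a fetched article.
// Regex-based sketch for an n8n Code node, not a full HTML parser.
function extractBodyText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '')  // drop inline scripts
    .replace(/<style[\s\S]*?<\/style>/gi, '')    // drop inline stylesheets
    .replace(/<[^>]+>/g, ' ')                    // strip remaining tags
    .replace(/&nbsp;/g, ' ')                     // decode the common entities
    .replace(/&amp;/g, '&')
    .replace(/\s+/g, ' ')                        // collapse whitespace
    .trim();
}
```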
Workflow 4: Meeting Notes Processor
Detects when a Google Meet or Zoom recording is ready, transcribes it, generates a structured summary, and distributes to relevant stakeholders.
Nodes in Order
- Google Calendar Trigger: Fires when a meeting ends (event end time passes).
- HTTP Request node (Zoom/Meet API): Checks if a recording is available for the meeting. Polls with retry if not yet available.
- HTTP Request node (download): Downloads the audio recording to n8n's temp storage.
- OpenAI node (Whisper transcription): Calls the Whisper API with the audio file. At $0.006/minute of audio, a 60-minute meeting costs $0.36 to transcribe.
- OpenAI node (summarize): Takes the transcript and generates: meeting title, attendees, 5 bullet point summary, decisions made, and action items with owners and due dates.
- Notion node: Creates a new page in your Meetings database with the full transcript and structured summary.
- Slack node: Posts the structured summary (not the full transcript) to the project channel with a link to the Notion page.
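The "polls with retry" step deserves care, because recordings can take minutes to become available after a meeting ends. A capped-retry sketch for a Code node — checkRecording stands in for whatever HTTP call your Zoom/Meet setup uses:

```javascript
// Poll until the recording is ready, with capped retries and a fixed delay.
// `checkRecording` is a placeholder for the HTTP call that returns the
// recording (or null/undefined while it is still processing).
async function waitForRecording(checkRecording, { retries = 10, delayMs = 30000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const recording = await checkRecording();
    if (recording) return recording;
    if (attempt < retries) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  // Give up and let the Error Trigger workflow notify a human
  throw new Error(`Recording not available after ${retries} attempts`);
}
```

Throwing after the final attempt (rather than returning null) means a stuck recording surfaces in your error workflow instead of failing silently.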
Workflow 5: Customer Feedback Analyzer
Processes incoming survey responses or form submissions, classifies sentiment, and triggers immediate escalation for negative feedback.
Nodes in Order
- Webhook node: Receives POST from Typeform, Jotform, or your custom form. Payload includes response text, customer email, and submission timestamp.
- OpenAI node (analyze): Returns JSON with: sentiment (positive/neutral/negative), sentiment_score (1-10), key_themes (array of strings), requires_followup (boolean), urgency (1-3).
- IF node: Branches on sentiment = negative OR urgency >= 2.
- Airtable node: Logs all responses to an Airtable base with the AI analysis fields. Useful for trend analysis over time.
- Slack node (negative path only): Immediate alert to the customer success channel with the response text, sentiment score, and customer email. Includes a direct reply link.
- Gmail node (negative path only): Drafts a personal follow-up email to the customer from the CS manager's address.
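The escalation branch is worth mirroring in a Code node so a malformed model response never silently skips an alert. A sketch of the same condition the IF node checks (sentiment = negative OR urgency >= 2), with fail-safe defaults:

```javascript
// Decide whether a feedback response escalates, mirroring the IF node's
// condition. Normalizes the model's output first so a missing or malformed
// field errs toward human review rather than skipping an escalation.
function shouldEscalate(analysis) {
  const sentiment = String(analysis.sentiment || '').toLowerCase();
  const urgency = Number(analysis.urgency);
  // Treat unparseable urgency as maximum so the default is "look at it"
  const safeUrgency = Number.isFinite(urgency) ? urgency : 3;
  return sentiment === 'negative' || safeUrgency >= 2;
}
```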
n8n AI Nodes: What They Actually Do
AI Agent Node
This is n8n's most powerful feature and the most misunderstood. The AI Agent node implements a ReAct (Reasoning + Acting) loop. You give it a goal, attach tools, and it decides what tools to call in what order to achieve the goal. Unlike a simple OpenAI chat node, the Agent can take multiple steps, use tool outputs to inform subsequent steps, and self-correct when tools return unexpected results.
Practical use: "Research this company and create a 5-point briefing." The agent might call a web search tool, then a data formatting tool, then an OpenAI summarization call — autonomously deciding the sequence.
LangChain Memory Nodes
Window Buffer Memory stores the last N turns of a conversation. Redis Chat Memory persists across sessions using a session ID. Use these when building multi-turn conversation workflows where context from earlier in the conversation affects later responses.
Vector Store Retrieval Nodes
n8n supports Supabase, Pinecone, Weaviate, Qdrant, and in-memory vector stores. The workflow pattern: embed query → retrieve from vector store → pass retrieved context to OpenAI node. This is a visual RAG pipeline that requires zero code.
Error Handling and Production Hardening
Error Handling in Workflows
Every production workflow should have an Error Trigger workflow set in n8n settings. This workflow fires when any other workflow fails and should:
- Log the error to your monitoring system
- Send a Slack notification with the workflow name, execution ID, and error message
- Optionally retry the failed execution after a delay
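Inside that error workflow, a small Code node can shape the Slack payload. The item shape sketched here (workflow name, execution ID, error message) follows what n8n's Error Trigger emits, but verify the exact fields against your n8n version:

```javascript
// Format the Slack alert in the Error Trigger workflow's Code node.
// The input shape is an assumption based on n8n's Error Trigger output --
// check the actual item structure in your instance before relying on it.
function formatErrorAlert(item) {
  const wf = item.workflow?.name || 'unknown workflow';
  const execId = item.execution?.id || 'n/a';
  const message = item.execution?.error?.message || 'no error message';
  return {
    text: `:rotating_light: Workflow failed: *${wf}*\n` +
          `Execution: ${execId}\n` +
          `Error: ${message}`,
  };
}
```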
API Rate Limiting
Add Wait nodes between batched API calls. OpenAI's rate limits vary by model and tier — GPT-4o-mini has generous limits, but embedding calls can hit token-per-minute limits on high-volume workflows. Add a 1-second Wait node between batches of 10 embedding calls to stay well within limits.
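When the batching happens inside a single Code node rather than across Wait nodes, the same pattern looks like this sketch — the Code-node equivalent of a 1-second pause between batches of 10 calls:

```javascript
// Process items in batches with a pause between batches -- the Code-node
// equivalent of "a 1-second Wait node between batches of 10 embedding calls".
async function processInBatches(items, handler, { batchSize = 10, pauseMs = 1000 } = {}) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Run one batch concurrently, then pause before the next
    results.push(...await Promise.all(batch.map(handler)));
    if (i + batchSize < items.length) {
      await new Promise(resolve => setTimeout(resolve, pauseMs));
    }
  }
  return results;
}
```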
Webhook Security
All webhook nodes should validate the request source. For Zapier-style connections, use header authentication with a secret key. For Stripe/GitHub webhooks, validate the signature using a Code node (they provide HMAC SHA256 signature verification documentation).
Backups
n8n workflow definitions are stored as JSON in the database. Export all workflows via the n8n API or UI weekly. Store exports in a Git repository — this gives you version history of your automation logic, which is invaluable when debugging regressions.
Real Cost Breakdown
Here is the actual monthly breakdown for our shared n8n instance running 4 clients' automation workflows:
- Hetzner CX32 VPS (4 vCPU, 8GB RAM): $12.49/month
- Hetzner Object Storage (backups): $1.20/month
- OpenAI API costs (across all workflows): ~$95/month
- Clearbit API (lead enrichment): $99/month
- Buffer API (content scheduling): $0 (team plan already paid)
- Total: ~$208/month for 50,000+ executions
The Zapier equivalent (50,000 executions/month at roughly 5 tasks each, or 250,000 tasks) would require the Zapier Business plan at $599/month, plus API costs, plus substantial overage charges. The savings fund two additional services we provide these clients.
Where to Go Next
Start with one workflow — the email classifier is the easiest to implement and delivers immediate value. Get comfortable with the n8n interface, the node configuration patterns, and the error handling before building complex multi-step AI agent workflows.
For a deeper comparison of n8n, Zapier, and Make across 20+ evaluation criteria, read our n8n vs Zapier vs Make deep dive. For a detailed breakdown of n8n's pricing tiers and when the free self-hosted version makes sense, see our n8n pricing guide. To explore how we deploy n8n as part of broader AI automation strategies, read our n8n tool overview.