Tags: n8n, mcp, ai-agents, tutorial, automation

n8n + MCP: Connect Your AI Agents to Any Business Tool

Jacobo Gonzalez Jaspe

The Model Context Protocol (MCP) solved a problem that held back AI agents for years: how does an AI model talk to your tools? Before MCP, every integration was custom — one connector for Slack, another for your database, another for your CRM. Each broke independently, required separate maintenance, and locked you into a specific AI provider.

MCP changed that. With 97 million installs as of March 2026, it’s now the universal standard adopted by Claude, ChatGPT, Gemini, Copilot, VS Code, and — critically for automation — n8n.

This tutorial shows you how to combine n8n’s workflow engine with MCP to build AI agents that use your actual business tools.


What MCP Does (30-Second Version)

MCP is a standard protocol that lets AI models discover and use external tools. Think of it as USB for AI — plug in a tool, and any AI model can use it.

flowchart LR
    AI["AI Model<br/>(Ollama, Claude, GPT)"]
    MCP["MCP Protocol"]
    AI <--> MCP
    MCP <--> DB["Database"]
    MCP <--> CRM["CRM"]
    MCP <--> DOCS["Documents"]
    MCP <--> API["Your APIs"]
    MCP <--> LMS["Docebo LMS"]
    
    style AI fill:#F5A623,color:#0B1628
    style MCP fill:#059669,color:#FAFAFA

Without MCP, you write custom integration code for every tool-model pair: N tools × M models means N×M integrations to build and maintain. With MCP, you write one MCP server per tool, and every MCP-capable model can use it: N + M pieces total.

Why n8n + MCP Is Powerful

n8n brings three things MCP alone doesn’t provide:

  1. Workflow orchestration: MCP connects AI to tools. n8n orchestrates multi-step workflows that use those connections — “fetch the document, analyze it, update the CRM, notify the team.”
  2. Visual design: Build agent workflows by dragging nodes, not writing code.
  3. Self-hosted: Your n8n instance runs locally. Combined with Ollama for inference, the entire pipeline stays on your hardware.

Building an MCP-Powered Agent Workflow

Prerequisites

  • n8n installed (self-hosted recommended)
  • Ollama running with a capable model (qwen2.5:7b or better)
  • An MCP server for your tool (see “Available MCP Servers” below)

Step 1: Set Up an MCP Server

MCP servers expose your tools to AI models. Here’s an example using the filesystem MCP server, which lets an AI read and search your documents:

# Install and run the filesystem MCP server
npx -y @modelcontextprotocol/server-filesystem /path/to/your/documents

This starts a server that exposes file reading, searching, and listing as MCP tools. Your AI agent can now ask “find all contracts mentioning renewal clauses” and the MCP server translates that into filesystem operations.
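Under the hood, MCP tool invocations are JSON-RPC 2.0 messages with the `tools/call` method. A minimal sketch of the request an agent would send to a filesystem server (the tool name `search_files` and its arguments are illustrative; check your server’s `tools/list` response for the real schema):

```javascript
// Minimal sketch of an MCP "tools/call" request (JSON-RPC 2.0).
// Tool name and argument shape are illustrative, not the official
// filesystem server schema -- discover the real one via "tools/list".
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "search_files", {
  path: "/path/to/your/documents",
  pattern: "renewal clause",
});

console.log(JSON.stringify(req));
```

The MCP client library inside your agent framework builds these messages for you; the sketch just shows what travels over the wire.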

Step 2: Create the n8n Workflow

In n8n, build a workflow with these nodes:

  1. Webhook Trigger — receives queries from users or other systems
  2. AI Agent Node — connects to Ollama + MCP tools
  3. Code Node — processes the agent’s response
  4. Output Node — sends results (email, Slack, webhook response)

The AI Agent node is where MCP integration happens. Configure it with:

  • Model: Your Ollama endpoint (http://localhost:11434)
  • Tools: Your MCP server endpoints
  • System prompt: Define the agent’s role and tool usage instructions
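The Code node from step 3 typically normalizes the agent’s reply before it reaches the output node. A minimal sketch, assuming the AI Agent node returns its text under `output` and that citations are lines beginning with "Source:" (both are assumptions; field names vary by n8n version and prompt design):

```javascript
// n8n Code node sketch: normalize the AI Agent node's output.
// Assumes the reply is in item.json.output and citations are
// lines starting with "Source:" -- adjust to your actual data.
function formatAgentResponse(items) {
  return items.map((item) => {
    const text = item.json.output || "";
    const citations = text
      .split("\n")
      .filter((line) => line.startsWith("Source:"));
    return { json: { answer: text, citations, citedCount: citations.length } };
  });
}

// Inside an n8n Code node this would be:
//   return formatAgentResponse($input.all());
const demo = formatAgentResponse([
  { json: { output: "Payment due in 30 days.\nSource: Martinez contract, §4.2" } },
]);
console.log(demo[0].json.citedCount); // 1
```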

Step 3: Define the Agent’s Behavior

The system prompt determines how your agent uses MCP tools:

You are a document analyst for a law firm. You have access to the firm's 
document repository via MCP tools. When asked a question:

1. Search for relevant documents using the search tool
2. Read the most relevant documents
3. Analyze the content and provide a sourced answer
4. Always cite the document name and section

Never make up information. If you can't find relevant documents, say so.

Step 4: Test and Deploy

Run a test query through the webhook:

curl -X POST http://localhost:5678/webhook/mcp-agent \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the payment terms in the Martinez contract?"}'

The agent will: receive the query → search documents via MCP → read relevant files → analyze with Ollama → return a sourced answer. All locally.
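The same webhook can be called from application code rather than curl. A sketch using the built-in `fetch` in modern Node.js (the URL comes from the workflow above; the response shape depends on how your Output node responds):

```javascript
// Sketch: query the n8n webhook from application code.
// URL and response shape depend on your Webhook and Output nodes.
const WEBHOOK_URL = "http://localhost:5678/webhook/mcp-agent";

function buildRequest(query) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}

async function askAgent(query) {
  const res = await fetch(WEBHOOK_URL, buildRequest(query));
  return res.json();
}

// askAgent("What are the payment terms in the Martinez contract?")
//   .then(console.log);
const opts = buildRequest("test");
console.log(opts.body);
```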

Available MCP Servers (10,000+)

The MCP ecosystem now has 10,000+ public servers. Key ones for businesses:

| MCP Server | What It Does | Use Case |
| --- | --- | --- |
| Filesystem | Read/search local files | Document Q&A, contract analysis |
| PostgreSQL | Query databases | Business intelligence, reporting |
| Slack | Send/read messages | Team notifications, escalations |
| Google Drive | Access cloud docs | Shared document analysis |
| GitHub | Manage repos, issues, PRs | Development workflow automation |
| Notion | Read/write pages | Knowledge base management |
| Stripe | Manage payments | Invoice processing, reconciliation |
| Docebo | LMS integration | Training analytics, learner progress |

Docebo’s April 2026 MCP integration is particularly relevant — it makes your LMS a knowledge source for AI assistants, enabling agents to answer training-related questions from actual course content.

Real Example: Document Intelligence Agent

Here’s a complete workflow we deploy for law firms and consulting businesses:

flowchart TD
    QUERY["User Query<br/>'What are the liability caps<br/>in our vendor contracts?'"]
    QUERY --> AGENT["n8n AI Agent"]
    AGENT --> SEARCH["MCP: Search<br/>Documents"]
    SEARCH --> READ["MCP: Read<br/>Top 5 Results"]
    READ --> ANALYZE["Ollama: Analyze<br/>(DeepSeek R1 14B)"]
    ANALYZE --> FORMAT["Format Response<br/>with Citations"]
    FORMAT --> DELIVER["Email + Slack<br/>Notification"]
    
    style QUERY fill:#1E293B,color:#FAFAFA
    style AGENT fill:#F5A623,color:#0B1628
    style ANALYZE fill:#059669,color:#FAFAFA
    style DELIVER fill:#059669,color:#FAFAFA

The key differentiator: DeepSeek R1 with its chain-of-thought reasoning shows how it reached its analysis — not just the conclusion. For legal and financial use cases, this auditability is essential.
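R1-style reasoning models emit their chain of thought inside `<think>...</think>` tags before the final answer. A sketch of separating the two so the reasoning trace can be stored as an audit log (the sample strings are illustrative):

```javascript
// Sketch: split a DeepSeek-R1-style response into its reasoning
// trace and final answer. R1 models wrap chain-of-thought in
// <think>...</think> tags ahead of the answer text.
function splitReasoning(raw) {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  return {
    reasoning: match ? match[1].trim() : "",
    answer: raw.replace(/<think>[\s\S]*?<\/think>/, "").trim(),
  };
}

const result = splitReasoning(
  "<think>Clause 7 caps liability at fees paid.</think>" +
    "Liability is capped at 12 months of fees."
);
console.log(result.answer);
```

Logging `reasoning` alongside `answer` gives reviewers the model’s derivation, not just its conclusion.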

The Privacy Architecture

This is where local MCP + n8n + Ollama beats cloud alternatives:

| Component | Where It Runs | Data Leaves? |
| --- | --- | --- |
| n8n workflow engine | Your server | No |
| MCP server | Your server | No |
| Ollama inference | Your hardware | No |
| Document storage | Your filesystem | No |
| Total data exposure | | Zero |

Compare this to using ChatGPT + cloud MCP: your documents travel to OpenAI’s servers for processing. Under GDPR and the EU AI Act, local deployment eliminates the data transfer risk entirely.

Getting Started

  1. Install n8n: docker run -it --rm -p 5678:5678 n8nio/n8n
  2. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh && ollama pull qwen2.5:7b
  3. Pick an MCP server: Start with filesystem for document Q&A
  4. Build your first workflow: Use the AI Agent node with MCP tools configured
  5. Test with real queries: Feed it actual business questions

For a simpler starting point, see our n8n + Ollama RAG tutorial which covers the basics of document Q&A without MCP.


Want us to build this for you? Schedule a free 15-minute assessment — we’ll evaluate your tool landscape and design an MCP-powered agent workflow tailored to your business.

Related tutorials: n8n AI Automation | n8n + Ollama RAG Pipeline | AI Code Review Workflow


Sources: MCP Official Specification | MCP Hits 97M Installs | Why MCP Won (The New Stack) | n8n AI Agents | MCP Roadmap 2026

