n8n + MCP: Connect Your AI Agents to Any Business Tool
The Model Context Protocol (MCP) solved a problem that held back AI agents for years: how does an AI model talk to your tools? Before MCP, every integration was custom — one connector for Slack, another for your database, another for your CRM. Each broke independently, required separate maintenance, and locked you into a specific AI provider.
MCP changed that. With 97 million installs as of March 2026, it’s now the universal standard adopted by Claude, ChatGPT, Gemini, Copilot, VS Code, and — critically for automation — n8n.
This tutorial shows you how to combine n8n’s workflow engine with MCP to build AI agents that use your actual business tools.

What MCP Does (30-Second Version)
MCP is a standard protocol that lets AI models discover and use external tools. Think of it as USB for AI — plug in a tool, and any AI model can use it.
```mermaid
flowchart LR
    AI["AI Model<br/>(Ollama, Claude, GPT)"]
    MCP["MCP Protocol"]
    AI <--> MCP
    MCP <--> DB["Database"]
    MCP <--> CRM["CRM"]
    MCP <--> DOCS["Documents"]
    MCP <--> API["Your APIs"]
    MCP <--> LMS["Docebo LMS"]
    style AI fill:#F5A623,color:#0B1628
    style MCP fill:#059669,color:#FAFAFA
```
Without MCP, you write custom integration code for each tool + each AI model. With MCP, you write one MCP server per tool, and every AI model can use it.
Why n8n + MCP Is Powerful
n8n brings three things MCP alone doesn’t provide:
- Workflow orchestration: MCP connects AI to tools. n8n orchestrates multi-step workflows that use those connections — “fetch the document, analyze it, update the CRM, notify the team.”
- Visual design: Build agent workflows by dragging nodes, not writing code.
- Self-hosted: Your n8n instance runs locally. Combined with Ollama for inference, the entire pipeline stays on your hardware.
Building an MCP-Powered Agent Workflow
Prerequisites
- n8n installed (self-hosted recommended)
- Ollama running with a capable model (`qwen2.5:7b` or better)
- An MCP server for your tool (see "Available MCP Servers" below)
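Before building anything, it's worth confirming the tooling is in place. This sketch only checks that the commands the tutorial uses exist on your PATH — it installs nothing and assumes nothing about versions:

```shell
#!/bin/sh
# Check for the commands this tutorial relies on.
# Prints one status line per tool; nothing here installs anything.
for tool in docker npx curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done | tee /tmp/prereq_check.txt
```

Any `missing` line points at the install step you still need before continuing.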
Step 1: Set Up an MCP Server
MCP servers expose your tools to AI models. Here’s an example using the filesystem MCP server (lets AI read/search your documents):
```bash
# Fetch and run the filesystem MCP server
npx -y @modelcontextprotocol/server-filesystem /path/to/your/documents
```
This starts a server that exposes file reading, searching, and listing as MCP tools. Your AI agent can now ask “find all contracts mentioning renewal clauses” and the MCP server translates that into filesystem operations.
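Under the hood, the MCP client (here, n8n) talks to that server over stdio using JSON-RPC. The messages below sketch the opening exchange — the method names and protocol version follow the MCP specification, but the `clientInfo` values are illustrative:

```shell
# The first three messages an MCP client writes to a stdio server:
# initialize -> initialized notification -> tools/list.
cat <<'EOF' | tee /tmp/mcp_handshake.jsonl
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"n8n-agent","version":"1.0"}}}
{"jsonrpc":"2.0","method":"notifications/initialized"}
{"jsonrpc":"2.0","id":2,"method":"tools/list"}
EOF
```

Piping these lines into the filesystem server process would return its tool catalog (tools such as `read_file` and `search_files`) as JSON-RPC responses — which is exactly what n8n does for you behind the AI Agent node.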
Step 2: Create the n8n Workflow
In n8n, build a workflow with these nodes:
- Webhook Trigger — receives queries from users or other systems
- AI Agent Node — connects to Ollama + MCP tools
- Code Node — processes the agent’s response
- Output Node — sends results (email, Slack, webhook response)
The AI Agent node is where MCP integration happens. Configure it with:
- Model: Your Ollama endpoint (`http://localhost:11434`)
- Tools: Your MCP server endpoints
- System prompt: Define the agent’s role and tool usage instructions
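To see what that configuration produces, here is a rough sketch of the request body the agent sends to Ollama's `/api/chat` endpoint when tools are attached. The `search_documents` function is hypothetical — real tool definitions are supplied by your MCP server:

```shell
# Sketch of an Ollama /api/chat request body with one tool attached.
# The search_documents tool is hypothetical; in practice the definitions
# are translated from whatever your MCP server advertises.
cat <<'EOF' | tee /tmp/ollama_request.json
{
  "model": "qwen2.5:7b",
  "messages": [
    {"role": "system", "content": "You are a document analyst. Use tools to answer."},
    {"role": "user", "content": "Find contracts mentioning renewal clauses."}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_documents",
        "description": "Search the document repository",
        "parameters": {
          "type": "object",
          "properties": {"query": {"type": "string"}},
          "required": ["query"]
        }
      }
    }
  ]
}
EOF
```

POSTing this body to `http://localhost:11434/api/chat` with a tool-capable model returns a `tool_calls` array (rather than plain text) when the model wants to invoke a tool — n8n then routes that call to the matching MCP server.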
Step 3: Define the Agent’s Behavior
The system prompt determines how your agent uses MCP tools:
```text
You are a document analyst for a law firm. You have access to the firm's
document repository via MCP tools. When asked a question:
1. Search for relevant documents using the search tool
2. Read the most relevant documents
3. Analyze the content and provide a sourced answer
4. Always cite the document name and section
Never make up information. If you can't find relevant documents, say so.
```
Step 4: Test and Deploy
Run a test query through the webhook:
```bash
curl -X POST http://localhost:5678/webhook/mcp-agent \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the payment terms in the Martinez contract?"}'
```
The agent will: receive the query → search documents via MCP → read relevant files → analyze with Ollama → return a sourced answer. All locally.
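The Code Node handles the last hop: pulling fields out of the agent's response before delivery. A minimal sketch with POSIX tools — note the response shape here is an assumption (your workflow defines it), and in n8n itself you would do this in the Code Node's JavaScript:

```shell
# Hypothetical agent response; the keys are an assumption, not an n8n default.
response='{"answer":"Net-30, per section 4.2","source":"martinez_contract.pdf"}'

# Extract the answer field with sed (no jq dependency).
answer=$(printf '%s' "$response" | sed -n 's/.*"answer":"\([^"]*\)".*/\1/p')
echo "$answer"
# -> Net-30, per section 4.2
```

In the real workflow, the extracted fields feed the Output Node (email body, Slack message, or webhook response).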
Available MCP Servers (10,000+)
The MCP ecosystem now has 10,000+ public servers. Key ones for businesses:
| MCP Server | What It Does | Use Case |
|---|---|---|
| Filesystem | Read/search local files | Document Q&A, contract analysis |
| PostgreSQL | Query databases | Business intelligence, reporting |
| Slack | Send/read messages | Team notifications, escalations |
| Google Drive | Access cloud docs | Shared document analysis |
| GitHub | Manage repos, issues, PRs | Development workflow automation |
| Notion | Read/write pages | Knowledge base management |
| Stripe | Manage payments | Invoice processing, reconciliation |
| Docebo | LMS integration | Training analytics, learner progress |
Docebo’s April 2026 MCP integration is particularly relevant — it makes your LMS a knowledge source for AI assistants, enabling agents to answer training-related questions from actual course content.
Real Example: Document Intelligence Agent
Here’s a complete workflow we deploy for law firms and consulting businesses:
```mermaid
flowchart TD
    QUERY["User Query<br/>'What are the liability caps<br/>in our vendor contracts?'"]
    QUERY --> AGENT["n8n AI Agent"]
    AGENT --> SEARCH["MCP: Search<br/>Documents"]
    SEARCH --> READ["MCP: Read<br/>Top 5 Results"]
    READ --> ANALYZE["Ollama: Analyze<br/>(DeepSeek R1 14B)"]
    ANALYZE --> FORMAT["Format Response<br/>with Citations"]
    FORMAT --> DELIVER["Email + Slack<br/>Notification"]
    style QUERY fill:#1E293B,color:#FAFAFA
    style AGENT fill:#F5A623,color:#0B1628
    style ANALYZE fill:#059669,color:#FAFAFA
    style DELIVER fill:#059669,color:#FAFAFA
```
The key differentiator: DeepSeek R1 with its chain-of-thought reasoning shows how it reached its analysis — not just the conclusion. For legal and financial use cases, this auditability is essential.
The Privacy Architecture
This is where local MCP + n8n + Ollama beats cloud alternatives:
| Component | Where It Runs | Data Leaves? |
|---|---|---|
| n8n workflow engine | Your server | No |
| MCP server | Your server | No |
| Ollama inference | Your hardware | No |
| Document storage | Your filesystem | No |
| Total data exposure | — | Zero |
Compare this to using ChatGPT + cloud MCP: your documents travel to OpenAI’s servers for processing. Under GDPR and the EU AI Act, local deployment eliminates the data transfer risk entirely.
Getting Started
- Install n8n: `docker run -it --rm -p 5678:5678 n8nio/n8n`
- Install Ollama: `curl -fsSL https://ollama.com/install.sh | sh && ollama pull qwen2.5:7b`
- Pick an MCP server: Start with filesystem for document Q&A
- Build your first workflow: Use the AI Agent node with MCP tools configured
- Test with real queries: Feed it actual business questions
For a simpler starting point, see our n8n + Ollama RAG tutorial which covers the basics of document Q&A without MCP.
Want us to build this for you? Schedule a free 15-minute assessment — we’ll evaluate your tool landscape and design an MCP-powered agent workflow tailored to your business.
Related tutorials: n8n AI Automation | n8n + Ollama RAG Pipeline | AI Code Review Workflow
Sources: MCP Official Specification | MCP Hits 97M Installs | Why MCP Won (The New Stack) | n8n AI Agents | MCP Roadmap 2026
Related reading
- Docebo MCP: Connect Your LMS to AI Assistants in 5 Minutes
- How to Deploy AI Locally in Your Business: Complete 2026 Guide
- ComfyUI Batch Image Generation: Create 100 Product Images in Minutes
Ready to Get Started?
VORLUX AI helps Spanish and European businesses deploy AI solutions that stay on your hardware, under your control. Whether you need edge AI deployment, LMS integration, or EU AI Act compliance consulting — we can help.
Book a free discovery call to discuss your AI strategy, or explore our services to see how we work.