Tags: n8n, code review, Ollama, automation, DevOps

Automate Code Reviews with AI: n8n + Ollama Workflow Tutorial

By VORLUX AI

Every pull request deserves a review, but not every team has the bandwidth. This tutorial shows you how to build an AI-powered code review pipeline using n8n and a local LLM (via Ollama) — completely self-hosted, no API costs, no data leaving your network.

What This Workflow Does

  1. Receives a webhook when a PR is opened on GitHub/GitLab
  2. Fetches the diff (code changes) from the PR
  3. Sends the diff to Ollama (local LLM) for analysis
  4. Formats the review into a structured report
  5. Posts the review to Discord (or Slack, email — your choice)
  6. Responds to the webhook confirming the review was posted

Total: 6 nodes, ~5 minutes to set up, zero recurring cost.

Architecture

GitHub PR Webhook → n8n → Fetch Diff → Ollama (local) → Format Review → Discord

The entire pipeline runs locally. Your code never touches a third-party API. This is critical for companies handling proprietary code or operating under GDPR constraints.

Prerequisites

  • n8n installed (self-hosted or cloud)
  • Ollama running with a code-capable model: ollama pull qwen2.5-coder:7b
  • GitHub/GitLab webhook access to your repository
  • Discord webhook URL (or Slack incoming webhook)

Step-by-Step Setup

1. Configure the Webhook Trigger

Create a new n8n workflow and add a Webhook node:

  • HTTP Method: POST
  • Path: /pr-review
  • This generates a URL like https://your-n8n.com/webhook/pr-review

Register this URL in your GitHub repo: Settings → Webhooks → Add webhook, paste it as the Payload URL, set the content type to application/json, and select only the “Pull requests” events.
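Once registered, GitHub sends a JSON payload for every PR event. A minimal sketch of the fields this workflow actually reads, using a mock payload (the shape follows GitHub's standard pull_request event; the sample values are invented):

```javascript
// Minimal mock of a GitHub "pull_request" webhook payload,
// reduced to the fields the downstream nodes read.
const payload = {
  action: "opened",
  pull_request: {
    title: "Fix race condition in job queue",
    diff_url: "https://github.com/acme/repo/pull/42.diff",
    user: { login: "alice" },
  },
};

// The webhook also fires on "closed", "synchronize", etc. —
// only newly opened PRs should trigger a review.
const shouldReview = payload.action === "opened";
console.log(shouldReview, payload.pull_request.diff_url);
```

In n8n you can enforce the same check by adding an IF node on {{ $json.action }} right after the webhook.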

2. Fetch the Diff

Add a Code node (name it Fetch Diff — later nodes reference it by this name) to extract the PR metadata and download the raw diff. The Code node exposes this.helpers.httpRequest for outbound requests:

const pr = $input.first().json.pull_request;

// Download the raw diff so the next node can pass it to the model
const diff = await this.helpers.httpRequest({ url: pr.diff_url });

return [{ json: { title: pr.title, author: pr.user.login, diff } }];

3. Send to Ollama for AI Review

Add an HTTP Request node:

  • Method: POST
  • URL: http://localhost:11434/api/generate
  • Body (JSON):
{
  "model": "qwen2.5-coder:7b",
  "prompt": "Review this code diff. Focus on: bugs, security issues, performance problems, and code style. Be specific and reference line numbers.\n\nDiff:\n{{ $json.diff }}",
  "stream": false
}

This sends the diff to your local Ollama instance. The model runs on your hardware — no tokens billed, no data uploaded.
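Diffs on large PRs can exceed the model's context window, so it is worth capping the prompt size before sending. A sketch of building the request body with truncation (the 12,000-character cap is an assumption — tune it to your model's context length):

```javascript
// Build the Ollama /api/generate request body, truncating oversized
// diffs so the prompt stays within the model's context window.
const MAX_DIFF_CHARS = 12000; // assumption: tune to your model's context size

function buildReviewRequest(diff) {
  const clipped = diff.length > MAX_DIFF_CHARS
    ? diff.slice(0, MAX_DIFF_CHARS) + "\n[diff truncated]"
    : diff;
  return {
    model: "qwen2.5-coder:7b",
    prompt:
      "Review this code diff. Focus on: bugs, security issues, performance " +
      "problems, and code style. Be specific and reference line numbers." +
      "\n\nDiff:\n" + clipped,
    stream: false, // one complete JSON response instead of a token stream
  };
}

const body = buildReviewRequest("diff --git a/app.js b/app.js\n+console.log('hi');");
console.log(body.prompt.length);
```

This logic can live in a small Code node in front of the HTTP Request node, so the request body is computed rather than templated.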

4. Format the Review

Add another Code node to structure the output:

const review = $input.first().json.response;
const title = $('Fetch Diff').first().json.title;
const author = $('Fetch Diff').first().json.author;

return [{
  json: {
    content: `**AI Code Review** 🤖\n**PR:** ${title}\n**Author:** ${author}\n\n${review}\n\n_Powered by Vorlux AI + Ollama (local inference)_`
  }
}];

5. Post to Discord

Add an HTTP Request node:

  • Method: POST
  • URL: Your Discord webhook URL
  • Body: { "content": "{{ $json.content }}" }
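Note that Discord caps a message's content field at 2,000 characters, so long reviews must be split across several messages. A minimal chunking sketch you could drop into a Code node before the Discord request:

```javascript
// Discord rejects messages whose "content" exceeds 2000 characters,
// so split long reviews into sequential chunks before posting.
const DISCORD_LIMIT = 2000;

function chunkMessage(text, limit = DISCORD_LIMIT) {
  const chunks = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}

const chunks = chunkMessage("a".repeat(4500));
console.log(chunks.length); // → 3 (2000 + 2000 + 500 characters)
```

Returning one n8n item per chunk makes the HTTP Request node post each chunk as its own Discord message.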

6. Respond to Webhook

Add a Respond to Webhook node returning { "status": "reviewed" }.

Download the Workflow

This exact workflow is available as a ready-to-import JSON file:

Download ai_code_review.json →

Import it in n8n from the workflow menu: Import from File → select the JSON.

Why Local AI for Code Review?

| Factor | Cloud API (GPT-4) | Local (Ollama + Qwen) |
|---|---|---|
| Cost | ~$0.03–0.10 per review | €0 |
| Privacy | Code sent to OpenAI servers | Code stays on your machine |
| Speed | 2–5 seconds (network) | 1–3 seconds (local) |
| GDPR | Requires a DPA with OpenAI | Fully compliant (local) |
| Availability | Depends on API uptime | Always available |
The same pipeline as a Mermaid diagram:

flowchart LR
    GH["GitHub PR"] --> WH["n8n Webhook"]
    WH --> DIFF["Fetch Diff"]
    DIFF --> AI["Ollama Review"]
    AI --> FMT["Format Output"]
    FMT --> DC["Discord Alert"]

    style GH fill:#1E293B,color:#FAFAFA
    style AI fill:#059669,color:#FAFAFA
    style DC fill:#F5A623,color:#0B1628

Choosing the Right Model for Code Review

Not all models are equal for code review tasks. Based on our testing across hundreds of PRs:

| Model | Strengths | Best For | Memory |
|---|---|---|---|
| Qwen 2.5 Coder 7B | Instruction-following, multi-language | General code review | ~4.5GB |
| DeepSeek R1 14B | Chain-of-thought reasoning | Complex logic bugs | ~10GB |
| Llama 3.3 70B | Deep analysis, architectural feedback | Architecture reviews | ~40GB |
| Phi-4 14B | Fast inference, concise output | Quick PR checks | ~9GB |

For most teams, Qwen 2.5 Coder 7B provides the best balance of quality and speed. If your Mac has 32GB+ memory, consider running DeepSeek R1 14B for its superior reasoning on complex diffs.

Next Steps

  • Replace Discord with Slack or email notification
  • Add a GitHub comment node to post the review directly on the PR
  • Use structured outputs (Ollama JSON schema) for machine-parseable reviews
  • Chain with a test runner workflow for full CI/CD integration
  • Explore n8n’s Model Context Protocol (MCP) support for connecting agents to external tools
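The structured-outputs idea above can be sketched as a request body: recent Ollama versions accept a JSON schema in the format field and constrain the model's output to match it. The schema fields below (summary, issues, severity) are illustrative assumptions, not a fixed standard:

```javascript
// Sketch: ask Ollama for machine-parseable review output by passing
// a JSON schema in the "format" field of /api/generate.
const structuredRequest = {
  model: "qwen2.5-coder:7b",
  prompt: "Review this diff and report findings.\n\nDiff:\n...",
  stream: false,
  format: {
    type: "object",
    properties: {
      summary: { type: "string" },
      issues: {
        type: "array",
        items: {
          type: "object",
          properties: {
            severity: { type: "string" },
            line: { type: "integer" },
            message: { type: "string" },
          },
          required: ["severity", "message"],
        },
      },
    },
    required: ["summary", "issues"],
  },
};

console.log(Object.keys(structuredRequest.format.properties));
```

With a schema in place, the Format Review node can JSON.parse the model's response instead of passing raw prose to Discord.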
