Build a Self-Improving AI Agent with n8n: The Learning Loop Workflow
Most AI automation tutorials show you how to chain a trigger to an API call. That is useful, but it is not intelligent. What if your automation could learn from its own failures, detect patterns across hundreds of executions, and generate prioritized recommendations — all without human intervention?
That is exactly what the Agent Learning Loop workflow does. At Vorlux AI, we built this n8n workflow to monitor our own AI agent fleet. Every 24 hours, it analyzes execution traces, identifies recurring failure patterns, saves learnings to a memory API, and notifies our ops channel on Discord. In this tutorial, you will build it from scratch.

What You Will Build
A 5-node n8n workflow that:
- Triggers daily on a schedule
- Fetches agent execution traces from your orchestration API
- Analyzes success and failure patterns with JavaScript code
- Saves learnings to a persistent memory endpoint
- Sends a summary to Discord (or Slack, email, etc.)
Time to build: 30-45 minutes
n8n version: 1.30+ (self-hosted or cloud)
Difficulty: Intermediate
```mermaid
flowchart LR
  CRON["⏰ Schedule<br/>Trigger (24h)"] --> FETCH["📥 Fetch<br/>Execution Traces"]
  FETCH --> ANALYZE["🔍 Analyze<br/>Patterns (JS)"]
  ANALYZE --> MEMORY["💾 Save<br/>Learnings (API)"]
  MEMORY --> NOTIFY["📣 Discord<br/>Notification"]
  style CRON fill:#F5A623,color:#0B1628
  style FETCH fill:#1E293B,color:#FAFAFA
  style ANALYZE fill:#1E293B,color:#FAFAFA
  style MEMORY fill:#059669,color:#FAFAFA
  style NOTIFY fill:#1E293B,color:#FAFAFA
```
Prerequisites
- A running n8n instance (self-hosted recommended for privacy — see our tools page for local deployment options)
- An API endpoint that exposes execution traces (we use our internal orchestrator at `localhost:3010`, but any logging API works)
- A Discord webhook URL (or substitute any notification channel)
Step 1: Schedule Trigger
Create a new workflow in n8n and add a Schedule Trigger node.
Configuration:
- Trigger interval: Every 1 day
- Field: days
This node fires once every 24 hours. For development, you can temporarily set it to every 5 minutes and test with the “Execute Workflow” button.
Tip: In production, schedule this during off-peak hours (e.g., 03:00 UTC) so you analyze a full day of agent activity.
Step 2: Fetch Recent Traces
Add an HTTP Request node connected to the schedule trigger.
Configuration:
- Method: GET
- URL: `http://your-api:3010/api/orchestrator/traces`
- Query Parameters: `period: 24h`, `status: all`
- Timeout: 15000 ms
This fetches every agent execution from the last 24 hours, both successful and failed. The response should be an array of trace objects with at minimum `status`, `agent_id`, `error_type`, and `error_message` fields.
If you do not have an orchestration API, you can substitute n8n’s own execution data. Use the n8n API endpoint `GET /api/v1/executions` with query params `?status=error&limit=100` to get recent failed executions.
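Before wiring the full analysis, it helps to confirm the shape the next step expects. Here is a minimal sanity check with hypothetical trace objects (the field names follow this tutorial's schema; adjust them to whatever your API actually returns):

```javascript
// Hypothetical trace objects matching the fields the analysis step reads.
const sampleTraces = [
  { status: 'completed', agent_id: 'scraper-01', error_type: null, error_message: null },
  { status: 'failed', agent_id: 'writer-02', error_type: 'timeout', error_message: 'LLM call exceeded 30s' },
  { status: 'failed', agent_id: 'writer-02', error_type: 'timeout', error_message: 'LLM call exceeded 30s' }
];

// Quick check you can run in a scratch Code node: how many traces failed?
const failed = sampleTraces.filter(t => t.status === 'failed' || t.status === 'error');
console.log(`${failed.length} of ${sampleTraces.length} traces failed`);
```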
Step 3: Analyze Success and Failure Patterns
This is the core intelligence of the workflow. Add a Code node and paste the following JavaScript:
```javascript
const traces = $input.first().json.data
  || $input.first().json.traces
  || [];

const successes = traces.filter(
  t => t.status === 'completed' || t.status === 'success'
);
const failures = traces.filter(
  t => t.status === 'failed' || t.status === 'error'
);

// Group failures by agent + error type
const patterns = {};
failures.forEach(f => {
  const key = f.agent_id + ':' + (f.error_type || 'unknown');
  if (!patterns[key]) {
    patterns[key] = {
      agent_id: f.agent_id,
      error_type: f.error_type || 'unknown',
      count: 0,
      examples: []
    };
  }
  patterns[key].count++;
  if (patterns[key].examples.length < 3) {
    patterns[key].examples.push(f.error_message || f.message);
  }
});

// Generate recommendations for recurring failures
const recommendations = Object.values(patterns)
  .filter(p => p.count >= 2)
  .map(p => ({
    agent_id: p.agent_id,
    recommendation: `Agent ${p.agent_id} failed ${p.count} times `
      + `with ${p.error_type}. Consider updating prompt or adding fallback.`,
    priority: p.count >= 5 ? 'high' : 'medium'
  }));

return [{
  json: {
    totalTraces: traces.length,
    successes: successes.length,
    failures: failures.length,
    failurePatterns: Object.values(patterns),
    recommendations,
    generatedAt: new Date().toISOString()
  }
}];
```
What this does:
- Separates successful and failed traces
- Groups failures by `agent_id` + `error_type` to detect recurring patterns
- Generates actionable recommendations only for agents that failed 2+ times
- Assigns priority: `high` for 5+ failures, `medium` for 2-4
This is the “learning” part. Instead of just alerting on every error, you surface systemic issues that need architectural fixes.
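To see the grouping and priority rules in action, here is a small standalone rerun on two hypothetical failures; you can paste it into a scratch Code node to experiment with the thresholds:

```javascript
// Two hypothetical failures from the same agent with the same error type.
const failures = [
  { agent_id: 'writer-02', error_type: 'timeout', error_message: 'LLM call exceeded 30s' },
  { agent_id: 'writer-02', error_type: 'timeout', error_message: 'LLM call exceeded 30s' }
];

// Same grouping key as the workflow: agent_id + error_type.
const patterns = {};
for (const f of failures) {
  const key = `${f.agent_id}:${f.error_type || 'unknown'}`;
  patterns[key] = patterns[key] || { agent_id: f.agent_id, error_type: f.error_type || 'unknown', count: 0 };
  patterns[key].count++;
}

// 2-4 failures => medium priority, 5+ => high.
const recommendations = Object.values(patterns)
  .filter(p => p.count >= 2)
  .map(p => ({ agent_id: p.agent_id, priority: p.count >= 5 ? 'high' : 'medium' }));

console.log(recommendations); // one medium-priority item for writer-02
```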
Step 4: Save Learnings to Memory
Add an HTTP Request node to persist the analysis.
Configuration:
- Method: POST
- URL: `http://your-api:3010/api/memory/log`
- Body (JSON):

```json
{
  "type": "learning-loop",
  "data": {
    "recommendations": "{{ $json.recommendations }}",
    "patterns": "{{ $json.failurePatterns }}"
  },
  "agent": "system"
}
```

Note: because the expressions sit inside quotes, n8n interpolates the arrays as strings. If your memory API expects structured JSON, build the request body in a preceding Code node instead.
This creates a persistent record. Over weeks, you build a knowledge base of what fails and why — enabling trend analysis and prompt improvement cycles.
Alternative: If you do not have a memory API, write to a Google Sheet, Notion database, or even a local JSON file via the n8n File node.
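As a sketch of what that later trend analysis could look like, assuming your memory API can return stored learning-loop records in the shape written above (the sample records here are hypothetical):

```javascript
// Hypothetical learning-loop records as the memory API might return them
// after a few weekly runs.
const records = [
  { type: 'learning-loop', data: { recommendations: [{ agent_id: 'writer-02', priority: 'high' }] } },
  { type: 'learning-loop', data: { recommendations: [
    { agent_id: 'writer-02', priority: 'medium' },
    { agent_id: 'scraper-01', priority: 'medium' }
  ] } }
];

// Count how often each agent appears in recommendations across runs:
// agents that keep showing up need an architectural fix, not another retry.
const trend = {};
for (const r of records) {
  for (const rec of r.data.recommendations) {
    trend[rec.agent_id] = (trend[rec.agent_id] || 0) + 1;
  }
}
console.log(trend); // writer-02 recurs across both runs
```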
Step 5: Discord Notification
Add a second HTTP Request node (parallel to Step 4, not sequential).
Configuration:
- Method: POST
- URL: `{{ $env.DISCORD_OPS_WEBHOOK }}`
- Body (JSON):

```json
{
  "content": "**Agent Learning Loop**\nTraces analyzed: {{ $json.totalTraces }}\nSuccess: {{ $json.successes }} | Failures: {{ $json.failures }}\nRecommendations: {{ $json.recommendations.length }}"
}
```
Store your webhook URL as an n8n environment variable (DISCORD_OPS_WEBHOOK) rather than hardcoding it.
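If you want to check the message text before pointing it at a real webhook, you can build the payload locally. This helper mirrors the body above and is only a convenience for testing, not part of the workflow:

```javascript
// Build the Discord payload from the analysis node's summary object.
function buildDiscordPayload(summary) {
  return {
    content: '**Agent Learning Loop**\n'
      + `Traces analyzed: ${summary.totalTraces}\n`
      + `Success: ${summary.successes} | Failures: ${summary.failures}\n`
      + `Recommendations: ${summary.recommendations.length}`
  };
}

const payload = buildDiscordPayload({
  totalTraces: 40, successes: 35, failures: 5, recommendations: []
});
console.log(payload.content);
```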
The Complete Workflow Architecture
```
[Daily Schedule] --> [Fetch Traces] --> [Analyze Patterns]
                                              |
                                              +--> [Save to Memory]
                                              |
                                              +--> [Discord Notify]
```
The analysis node fans out to two parallel outputs: persistence and notification. This means a failure in Discord delivery does not block your memory writes.
Extending the Workflow
Once the base loop is working, consider these enhancements:
- Auto-remediation: If a specific agent fails 10+ times, automatically disable it and notify the team
- Weekly digest: Add a second schedule trigger (weekly) that summarizes trends across 7 days of learning loop data
- Prompt auto-update: For agents with consistent failures, use an LLM node to suggest prompt improvements based on the error patterns
- Cost tracking: Add token/cost data to traces and track spend per agent over time
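For the auto-remediation idea, a small pure helper keeps the threshold logic testable. The actual disable call would go through an HTTP Request node against whatever endpoint your orchestrator exposes (no specific endpoint is assumed here):

```javascript
// Pick agents whose failure count crosses the auto-disable threshold.
// Feed it the failurePatterns array produced by the analysis node.
const DISABLE_THRESHOLD = 10;

function agentsToDisable(failurePatterns, threshold = DISABLE_THRESHOLD) {
  return failurePatterns
    .filter(p => p.count >= threshold)
    .map(p => p.agent_id);
}

const result = agentsToDisable([
  { agent_id: 'writer-02', count: 12 },
  { agent_id: 'scraper-01', count: 3 }
]);
console.log(result); // only writer-02 crosses the threshold
```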
Download the Workflow
You can import this exact workflow into your n8n instance. Download the JSON from our workflow library — look for agent_learning_loop.json in the n8n collection.
Related reading
- Build an AI Content Pipeline with n8n: From RSS to Blog Post
Why This Matters for Edge AI
At Vorlux AI, we deploy AI agents on local hardware (Jetson, Mac Mini, Intel NUC) for Spanish SMEs. When you run 20+ agents on edge infrastructure, you cannot afford to manually monitor each one. The Learning Loop gives us automated observability that improves our system without adding headcount.
This is the difference between “we deployed AI” and “we deployed AI that gets smarter.” If you are running any kind of multi-agent system, this pattern is essential.
Ready to automate your AI operations? Explore our full workflow library with 230+ n8n templates, or check out our tools page for the complete edge AI deployment stack.
Sources: n8n Documentation · Ollama API
Need help setting this up for your business? Contact us for a free 30-minute assessment.