
ComfyUI ControlNet Tutorial: Guided Image Generation with Edge Detection

By VORLUX AI

Want to generate an image that follows the exact structure of a reference photo? That’s what ControlNet does — it extracts the edges, pose, or depth from an image and uses that as a guide for generation. This tutorial shows you how to set it up in ComfyUI.

ComfyUI image generation workflow

What You’ll Build

A workflow that:

  1. Loads a reference image (e.g., a product photo, architectural sketch, or portrait)
  2. Extracts edge structure using Canny edge detection
  3. Generates a new image guided by those edges + your text prompt
  4. Runs 100% locally — no cloud, no API costs

Use cases: Product visualization, architectural rendering, brand-consistent content, style transfer.
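The Canny step in the list above turns your reference photo into a black-and-white edge map using two thresholds. As a rough illustration of what those thresholds do — a NumPy sketch of Canny's double-threshold stage, not ComfyUI's actual implementation:

```python
import numpy as np

def classify_edges(grad_mag, low=100, high=200):
    """Classify gradient magnitudes the way Canny's double threshold does:
    >= high  -> strong edge (always kept)
    low..high -> weak edge (kept only if connected to a strong edge)
    < low    -> suppressed (never an edge)."""
    strong = grad_mag >= high
    weak = (grad_mag >= low) & (grad_mag < high)
    return strong, weak

# Toy 3x3 "image" of gradient magnitudes
mags = np.array([[50, 120, 250],
                 [90, 180, 30],
                 [210, 140, 60]])
strong, weak = classify_edges(mags)
print(int(strong.sum()), int(weak.sum()))  # 2 strong, 3 weak
```

Raising `low_threshold` suppresses more faint texture; lowering `high_threshold` promotes more pixels to definite edges — which is exactly the tuning lever you use later when the edge map is too noisy or too sparse.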

Prerequisites

  - A working ComfyUI installation
  - An SDXL or SD 1.5 checkpoint in ComfyUI/models/checkpoints/
  - A Canny ControlNet model such as control-lora-canny-rank256.safetensors in ComfyUI/models/controlnet/
  - The comfyui_controlnet_aux custom node pack, which provides the CannyEdgePreprocessor node

At a high level, data flows through the workflow like this:

```mermaid
flowchart LR
    INPUT["Input Image"] --> CN["ControlNet<br/>Preprocessor"]
    CN --> COND["Conditioning"]
    PROMPT["Text Prompt"] --> COND
    COND --> SAMPLER["KSampler"]
    SAMPLER --> OUTPUT["Generated Image"]

    style INPUT fill:#1E293B,color:#FAFAFA
    style CN fill:#F5A623,color:#0B1628
    style OUTPUT fill:#059669,color:#FAFAFA
```

The Workflow (11 Nodes)

```
Reference Image → Canny Edge Detection → ControlNet Apply
Text Prompt → CLIP Encode ──────────────→ KSampler → VAE Decode → Save Image
Checkpoint → Model + VAE ─────────────────────┘
```

Node-by-Node Setup

| # | Node | Settings |
|---|------|----------|
| 1 | CheckpointLoaderSimple | Select your SDXL or SD 1.5 checkpoint |
| 2 | ControlNetLoader | Load control-lora-canny-rank256.safetensors |
| 3 | LoadImage | Upload your reference image |
| 4 | CannyEdgePreprocessor | low_threshold: 100, high_threshold: 200, resolution: 1024 |
| 5 | CLIPTextEncode (positive) | Your prompt: "professional product photo, studio lighting" |
| 6 | CLIPTextEncode (negative) | "blurry, low quality, watermark, text" |
| 7 | ControlNetApplyAdvanced | strength: 0.8, start_percent: 0, end_percent: 1.0 |
| 8 | EmptyLatentImage | width: 1024, height: 1024, batch_size: 1 |
| 9 | KSampler | steps: 25, cfg: 7.5, sampler: euler_ancestral, scheduler: normal |
| 10 | VAEDecode | Connect to KSampler output |
| 11 | SaveImage | filename_prefix: "controlnet_" |
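Under the hood, ComfyUI serializes this graph as API-format JSON: each node id maps to a class_type and its inputs, and links are `[node_id, output_index]` pairs. A minimal sketch of a few of the nodes above — the node ids and exact input names here are assumptions; export your own graph via Save (API Format) to see the real keys:

```python
# Illustrative fragment of an API-format ComfyUI graph (not the full
# 11-node workflow). Node ids and input names are assumptions.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control-lora-canny-rank256.safetensors"}},
    "4": {"class_type": "CannyEdgePreprocessor",
          "inputs": {"low_threshold": 100, "high_threshold": 200,
                     "resolution": 1024,
                     # ["3", 0] means "output 0 of node 3" (the LoadImage node)
                     "image": ["3", 0]}},
    "7": {"class_type": "ControlNetApplyAdvanced",
          "inputs": {"strength": 0.8, "start_percent": 0.0, "end_percent": 1.0}},
}
print(workflow["4"]["inputs"]["low_threshold"])  # 100
```

Editing these values in the JSON and re-queuing is often faster than clicking through the UI when sweeping parameters.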

Key Parameters

| Parameter | What it does | Recommended |
|---|---|---|
| strength (ControlNet) | How strictly to follow the edge guide | 0.7-0.9 for structure, 0.3-0.5 for loose guidance |
| low_threshold (Canny) | Lower hysteresis bound; gradients below it are discarded | 100 (default) |
| high_threshold (Canny) | Upper hysteresis bound; gradients above it count as strong edges | 200 (default) |
| cfg (KSampler) | How closely to follow the text prompt | 7-8 for balanced, 10+ for strict |
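The cfg value implements classifier-free guidance: at each step the model predicts noise twice, with and without your prompt, and the sampler extrapolates toward the prompted prediction. A minimal NumPy sketch of that arithmetic (not ComfyUI's sampler code):

```python
import numpy as np

def apply_cfg(uncond_pred, cond_pred, cfg=7.5):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the prompt-conditioned one by a factor of cfg.
    return uncond_pred + cfg * (cond_pred - uncond_pred)

uncond = np.zeros(4)  # toy noise prediction without the prompt
cond = np.ones(4)     # toy noise prediction with the prompt
print(apply_cfg(uncond, cond, cfg=7.5))  # [7.5 7.5 7.5 7.5]
```

At cfg = 1 the prompt has no extra pull; very high cfg over-amplifies the difference, which is why 10+ can look oversaturated or "fried".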

Tips for Best Results

  1. Start with strong ControlNet (0.8+) and reduce if results look rigid
  2. Use line art or architectural photos as references — clean edges work best
  3. Match resolution: if your reference is 1024x1024, set EmptyLatentImage to 1024x1024
  4. Experiment with ControlNet types: Canny (edges), Depth (3D), Pose (human), Scribble (sketches)
  5. SDXL models produce higher quality than SD 1.5 but need more VRAM (6+ GB)
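Tip 3's resolution matching can be automated. A small helper for picking EmptyLatentImage dimensions, assuming the standard Stable Diffusion constraint that width and height be multiples of 8 (the latent is 1/8 the pixel resolution); the `latent_size` name is mine:

```python
def latent_size(ref_w, ref_h, target=1024):
    """Scale the reference so its long side is `target`, then snap both
    sides to multiples of 8 to satisfy the latent-space constraint."""
    scale = target / max(ref_w, ref_h)
    snap = lambda v: max(8, round(v * scale / 8) * 8)
    return snap(ref_w), snap(ref_h)

print(latent_size(1500, 1000))  # (1024, 680)
print(latent_size(1024, 1024))  # (1024, 1024)
```

Matching the aspect ratio of the reference matters more than matching its exact pixel count — a square latent against a wide reference stretches the edge map.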

Download the Workflow

Ready-to-import JSON file:

Download vorlux_controlnet_canny_guided.json →

Import in ComfyUI: Load Workflow → Upload File.
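You can also drive ComfyUI from a script: a running instance accepts workflows over HTTP on its default local port. A hedged sketch using only the standard library, assuming the JSON was exported in API format (the regular UI save uses a different schema and will not queue as-is):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address

def build_payload(workflow: dict) -> bytes:
    # The /prompt endpoint expects the API-format graph under the "prompt" key
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    """POST a workflow to the running ComfyUI server and return its
    response (which includes the queued prompt_id)."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage, with ComfyUI running and the workflow exported in API format:
#   with open("vorlux_controlnet_canny_guided.json") as f:
#       print(queue_prompt(json.load(f)))
```

This is handy for batch runs: load the JSON once, mutate a parameter (seed, strength, prompt) in the dict, and queue each variant.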

Hardware Requirements

| Hardware | VRAM | Speed | Quality |
|---|---|---|---|
| Mac Mini M4 (24GB) | 24GB unified | ~30s/image | Excellent |
| RTX 4060 Ti (16GB) | 16GB | ~15s/image | Excellent |
| RTX 3090 (24GB) | 24GB | ~10s/image | Excellent |
| Mac M3 Pro (32GB) | 32GB unified | ~25s/image | Excellent |

Common Issues and Fixes

Even experienced users run into problems with ControlNet. Here are the most frequent issues and their solutions:

| Problem | Cause | Fix |
|---|---|---|
| Output ignores reference image | ControlNet strength too low, or node not connected properly | Verify the ControlNetApplyAdvanced node is wired between the CLIP encodes and the KSampler. Set strength to 0.8+ as a starting point. |
| Output looks like a blurry copy | Strength too high combined with low CFG | Reduce ControlNet strength to 0.6-0.7 and raise CFG to 8-10 to give the text prompt more influence. |
| Black or blank output image | Mismatched model types (SD 1.5 ControlNet with SDXL checkpoint) | Ensure your ControlNet model matches your base checkpoint version. SDXL checkpoints need SDXL-compatible ControlNet models. |
| "ControlNet model not found" error | Model file in wrong directory | Place .safetensors ControlNet files in ComfyUI/models/controlnet/, not in the checkpoints folder. |
| Canny edges too noisy or too sparse | Threshold values not tuned for your image | For photos with lots of detail, raise low_threshold to 150. For sketches or simple outlines, lower high_threshold to 150. Preview the Canny output node before generating. |
| Very slow generation (60s+) | Resolution mismatch, or model loaded on CPU | Match EmptyLatentImage resolution to your reference image. Check the ComfyUI console output to confirm the model loaded on the GPU, not the CPU. |

Tip: Always preview the Canny edge detection output before running the full generation. If the edges do not clearly capture the structure you want to preserve, adjust thresholds first. Generating with bad edges wastes compute time and produces unusable results.
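For the "model not found" row above, you can check from a script what ComfyUI will actually see. A small helper (the `find_controlnets` name is mine) that lists the .safetensors files in the folder ComfyUI scans for ControlNet models:

```python
from pathlib import Path

def find_controlnets(comfy_root):
    """List .safetensors files under <root>/models/controlnet/ -- the
    directory ComfyUI scans for ControlNet models. Files dropped into
    models/checkpoints by mistake will not appear in the
    ControlNetLoader dropdown."""
    folder = Path(comfy_root) / "models" / "controlnet"
    if not folder.is_dir():
        return []
    return sorted(p.name for p in folder.glob("*.safetensors"))

# Usage: print(find_controlnets("/path/to/ComfyUI"))
```

If the list comes back empty, the file is in the wrong folder (or has the wrong extension), and no amount of refreshing the node will fix it.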

