# ComfyUI Batch Image Generation: Create 100 Product Images in Minutes
Need consistent product images, social media graphics, or marketing assets? ComfyUI’s batch generation workflow lets you create hundreds of variations from a single prompt template — all running locally on your hardware with zero API costs.

```mermaid
flowchart LR
A["Load SDXL\nCheckpoint"] --> B["Set Prompt\nTemplate"]
B --> C["Configure\nBatch Size\n& Resolution"]
C --> D["KSampler\n(4 steps, CFG 1.0)"]
D --> E["VAE Decode"]
E --> F["Save Images\n(auto-named)"]
B --> NEG["Negative\nPrompt"]
NEG --> D
C --> SEED["Fixed Seed\n(reproducible)"]
SEED --> D
F --> OUT["output/\nbatch_001.png\nbatch_002.png\n..."]
style A fill:#DBEAFE,stroke:#2563EB,color:#000
style B fill:#FEF3C7,stroke:#F5A623,color:#000
style C fill:#FEF3C7,stroke:#F5A623,color:#000
style D fill:#D1FAE5,stroke:#059669,color:#000
style E fill:#D1FAE5,stroke:#059669,color:#000
style F fill:#D1FAE5,stroke:#059669,color:#000
style NEG fill:#FECACA,stroke:#B91C1C,color:#000
style SEED fill:#DBEAFE,stroke:#2563EB,color:#000
style OUT fill:#D1FAE5,stroke:#059669,color:#000
```
## What This Workflow Does
Our Batch Image Generation workflow for ComfyUI:
- Takes a text prompt template with variables (product name, style, background)
- Generates multiple variations using SDXL Turbo for speed
- Outputs consistent, branded images at configurable resolutions
- Saves to organized folders with automatic naming
Hardware requirement: 8GB+ VRAM (RTX 4060 or better) or 16GB+ unified memory (Mac Mini M4)
## Download the Workflow
Download vorlux_batch_image_generation.json — import directly into ComfyUI.
## Step-by-Step Setup
### 1. Install ComfyUI
If you don’t have ComfyUI yet:
```bash
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt
python main.py
```
Open http://localhost:8188 in your browser.
### 2. Import the Workflow
Drag the downloaded .json file onto the ComfyUI canvas. The workflow includes:
- KSampler node configured for SDXL Turbo (4 steps, CFG 1.0)
- Batch size parameter (default: 10 images per run)
- Resolution presets: 512x512, 1024x1024, 1200x628 (social), 1920x1080 (HD)
- Save Image node with automatic naming
### 3. Configure Your Prompt Template
The workflow uses a prompt template with placeholders:
```text
Professional product photo of [PRODUCT],
clean white background, studio lighting,
[STYLE] aesthetic, high resolution, sharp focus
```
Replace [PRODUCT] and [STYLE] with your specifics. Examples:
- “Professional product photo of a Mac Mini M4, clean white background, minimalist aesthetic”
- “Social media banner for AI consulting company, dark navy background, modern tech aesthetic”
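For larger catalogs, it helps to fan the template out into concrete prompts in code before queuing anything. The sketch below is our own illustration, not part of ComfyUI; the `TEMPLATE` string and `expand_prompts` helper are assumptions that mirror the placeholder template above:

```python
from itertools import product

# Same placeholder template as above, expressed as a Python format string.
TEMPLATE = ("Professional product photo of {product}, "
            "clean white background, studio lighting, "
            "{style} aesthetic, high resolution, sharp focus")

def expand_prompts(products, styles):
    """Return one finished prompt per (product, style) combination."""
    return [TEMPLATE.format(product=p, style=s)
            for p, s in product(products, styles)]

prompts = expand_prompts(["Mac Mini M4", "mechanical keyboard"],
                         ["minimalist", "modern tech"])
# 2 products x 2 styles -> 4 prompts, ready to paste or queue
```

Each entry can then be pasted into the prompt node, or queued programmatically.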
### 4. Set Batch Size and Resolution
In the Empty Latent Image node:
- Width/Height: Choose your target resolution
- Batch size: Set how many images to generate (1-50 per batch)
For SDXL Turbo at 512x512:
- RTX 4060: ~2 seconds per image
- Mac Mini M4: ~3 seconds per image
- RTX 4090: ~0.8 seconds per image
### 5. Queue and Generate
Click “Queue Prompt” — your batch generates automatically. Images save to ComfyUI/output/ with sequential numbering.
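Batches can also be queued programmatically through ComfyUI's HTTP endpoint (`POST /prompt` on port 8188) instead of clicking the button. A minimal sketch, assuming a workflow saved in API format ("Save (API Format)" in the UI) and a CLIPTextEncode node whose id you look up in that JSON; the node id `"6"` here is a placeholder:

```python
import json
import urllib.request

COMFY_URL = "http://localhost:8188/prompt"  # default ComfyUI address

def build_payload(workflow: dict, prompt_text: str, node_id: str = "6") -> dict:
    """Patch the positive-prompt node, then wrap the workflow in the
    envelope that /prompt expects. node_id "6" is a placeholder --
    find the real CLIPTextEncode id in your API-format JSON."""
    wf = json.loads(json.dumps(workflow))        # cheap deep copy
    wf[node_id]["inputs"]["text"] = prompt_text  # CLIPTextEncode text input
    return {"prompt": wf}

def queue_prompt(payload: dict) -> None:
    """POST the payload; ComfyUI replies with a prompt_id."""
    req = urllib.request.Request(
        COMFY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Usage would look like `queue_prompt(build_payload(workflow, "Professional product photo of a Mac Mini M4"))`, with `workflow` loaded from your exported JSON file.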
## Real-World Use Cases
| Use Case | Images Needed | Time (RTX 4060) | Cloud API Cost | Local Cost |
|---|---|---|---|---|
| Product catalog (50 items) | 50 | ~2 minutes | ~EUR 5 (DALL-E) | EUR 0 |
| Social media month (30 posts) | 30 | ~1 minute | ~EUR 3 | EUR 0 |
| Website hero variations | 10 | ~20 seconds | ~EUR 1 | EUR 0 |
| A/B test thumbnails | 20 | ~40 seconds | ~EUR 2 | EUR 0 |
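The table figures reduce to two per-unit assumptions: roughly 2.4 seconds per image locally (RTX 4060 at 512x512) and about EUR 0.10 per cloud image. A throwaway helper makes the comparison explicit for any batch size; both defaults are illustrative estimates, not measured benchmarks:

```python
def local_vs_cloud(n_images: int, sec_per_image: float = 2.4,
                   cloud_eur_per_image: float = 0.10) -> dict:
    """Local generation time vs. cloud API cost for one batch.
    Defaults are rough assumptions matching the table above."""
    return {
        "local_minutes": round(n_images * sec_per_image / 60, 1),
        "cloud_cost_eur": round(n_images * cloud_eur_per_image, 2),
    }

result = local_vs_cloud(50)  # the 50-item product catalog row
```

Plug in your own card's seconds-per-image and your provider's per-image price to reproduce the table for your setup.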
## Tips for Consistent Results
- Use a fixed seed for reproducible batches — change seed to get new variations
- SDXL Turbo is fastest for batch work (4 steps vs 20+ for standard SDXL)
- Negative prompts matter: add “blurry, low quality, distorted” to filter bad outputs
- ControlNet for consistency: use a reference image to keep style uniform across batches
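One way to get both reproducibility and controlled variation from the fixed-seed tip is to derive each image's seed from a base seed plus the item name. This helper is our own convention, not a ComfyUI feature:

```python
import hashlib

def seed_for(item: str, base_seed: int = 42) -> int:
    """Deterministic per-item seed: the same item always maps to the
    same seed, so a batch can be re-run bit-for-bit identically.
    Bump base_seed to get a fresh set of variations for every item."""
    digest = hashlib.sha256(f"{base_seed}:{item}".encode()).hexdigest()
    return int(digest[:8], 16)  # 32-bit value, fits a seed widget
```

Feed the returned value into the KSampler's seed input for each item in your batch.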
## Optimizing Batch Generation Speed
If you are generating 50+ images per session, these optimizations can cut your total time by 30-50%:
- Enable VAE tiling. In the VAE Decode node, check “tile” if available. This reduces VRAM spikes on large batches and prevents out-of-memory crashes, especially at 1024x1024.
- Use FP16 precision. When loading your checkpoint, select the fp16 variant if your GPU supports it (all RTX cards do). This halves memory usage and speeds up inference with negligible quality loss.
- Generate at lower resolution, then upscale. Render at 512x512 with SDXL Turbo (fastest), then run a second pass through a 2x upscaler node. The total time is often less than generating directly at 1024x1024.
- Queue multiple small batches instead of one large batch. A batch size of 10 queued 5 times is more stable than a single batch of 50, which can cause VRAM fragmentation and slowdowns.
- Close other GPU-consuming applications. Discord, browser hardware acceleration, and video players all compete for VRAM. On an 8GB card, freeing even 500MB can prevent swapping to system RAM (which is 10x slower).
- Pin your model in memory. If using ComfyUI Manager, enable “keep models in VRAM” to avoid reloading the checkpoint between queues. This alone saves 3-5 seconds per batch on NVMe storage, more on HDD.
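The small-batches tip is easy to automate: split the total into chunks first, then queue one generation run per chunk. `split_batches` below is a hypothetical helper of our own; the queueing itself would go through the UI or ComfyUI's `/prompt` endpoint:

```python
def split_batches(total: int, batch_size: int = 10) -> list[int]:
    """Return batch sizes that sum to `total`, none larger than
    `batch_size`. Queue one run per entry instead of one huge batch,
    which avoids VRAM fragmentation on 8GB cards."""
    full, rest = divmod(total, batch_size)
    return [batch_size] * full + ([rest] if rest else [])

# 50 images -> five stable batches of 10 rather than one batch of 50
runs = split_batches(50)
```

The same split also gives you natural checkpoints: if a run fails mid-session, only one small batch is lost.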
For production workloads generating 500+ images per day, consider a dedicated Mac Mini M4 (24GB) running ComfyUI as a headless image server. At EUR 700 one-time cost, it replaces roughly EUR 50-100/month in cloud API spending.
## Related Reading
- ComfyUI ControlNet Tutorial: Guided Image Generation with Edge Detection
- Docebo MCP: Connect Your LMS to AI Assistants in 5 Minutes
- How to Deploy AI Locally in Your Business: Complete 2026 Guide
## Related Resources
- 92 ComfyUI Workflows — browse our full visual generation library
- Hardware Catalog — 13 devices benchmarked for ComfyUI
- ROI Calculator — compare local generation vs cloud API costs
- 4,346 Custom Nodes Reference — downloadable node list
Sources: ComfyUI GitHub · Stable Diffusion Models
Need help implementing AI workflows in your business? Schedule a free consultation to design a pipeline tailored to your needs.