Image-to-Image with Flux in ComfyUI

Level Up Your AI Image Generation: The Ultimate Guide to Image-to-Image with Flux

Aug 23, 2024 · The Local Lab

Text-to-image is powerful, but it only gets you so far. The moment you introduce a source image into your workflow — something to guide the composition, style, or subject — you unlock a completely different level of creative control. That's what image-to-image (img2img) is all about, and with Flux in ComfyUI, the results are better than they've ever been.

This guide covers everything: what image-to-image actually does, the key settings that control the output, real-world use cases, and a step-by-step setup in ComfyUI using Flux Dev.

What Is Image-to-Image?

Image-to-image generation uses an existing image as a starting point alongside your text prompt. Instead of generating from pure noise, the model begins with a noisy version of your source image and refines it toward your prompt description.

The practical result: the AI output inherits elements of your original image — composition, subject shape, lighting, color palette — while the text prompt guides the style, mood, and details of the transformation. You control how much influence the source image has versus how freely the AI interprets the prompt.
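The mechanics can be sketched in a few lines of numpy. This is a deliberately simplified model (a linear blend rather than the flow-matching schedule Flux actually uses), but it captures the core idea: a strength value decides how much of the source survives into the sampler's starting point.

```python
import numpy as np

def noised_start(source_latent, strength, rng=None):
    """Mix the source latent with Gaussian noise.

    strength=0.0 returns the source unchanged; strength=1.0 returns
    pure noise, which is equivalent to plain text-to-image generation.
    (Simplified illustration, not Flux's actual noise schedule.)
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(source_latent.shape)
    return (1.0 - strength) * source_latent + strength * noise

latent = np.ones((4, 64, 64))       # stand-in for an encoded source image
subtle = noised_start(latent, 0.2)  # mostly source
free   = noised_start(latent, 0.9)  # mostly noise
```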

What You Can Do With It

Style Transfer

Take a photograph and reimagine it as an oil painting, a comic book panel, a watercolor, or any other aesthetic — while keeping the underlying composition intact.

Consistent Character Variations

Generate multiple images of the same character in different poses, outfits, or settings while maintaining recognizable visual consistency across the series.

Creative Transformations

Reimagine a photo of your pet as an astronaut, turn a vacation snapshot into surreal artwork, or transform a product photo into a rendered illustration.

Graphic Novel & Storyboarding

Maintain a consistent visual style across a series of scenes by using each generated panel as the source image for the next, creating a coherent visual thread.
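The chaining pattern itself is easy to express in code. Here `generate` is a hypothetical stand-in for a full img2img run; the point is the loop structure, where each output becomes the source for the next panel:

```python
def generate(source, prompt):
    """Placeholder for an actual ComfyUI img2img render."""
    return f"{source} -> ({prompt})"

def storyboard(first_image, prompts):
    panels, source = [], first_image
    for prompt in prompts:
        panel = generate(source, prompt)
        panels.append(panel)
        source = panel  # chain: this panel seeds the next one
    return panels
```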

Upscaling & Enhancement

Pass a lower-quality or AI-generated image back through the model to add detail, fix artifacts, or refine areas that didn't come out quite right.

Product Visualization

Take a product photo and generate styled marketing renders in different environments, lighting setups, or artistic treatments without a full photoshoot.

The Key Setting: Denoise Strength

The most important control in any img2img workflow is the denoise strength (sometimes called denoising strength or noise level). This single value determines how much the model can deviate from your source image.

Denoise      Result
0.1 – 0.3    Subtle changes
0.4 – 0.6    Style shift
0.7 – 0.85   Creative freedom
0.9 – 1.0    Near text-to-image
Start at 0.6 and adjust from there. It's the most versatile starting point — enough transformation to feel creative, enough source influence to keep the output grounded in your original image.
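One way to build intuition: in ComfyUI, denoise effectively controls how many of the sampler's steps actually run. The sketch below is a simplification (the real sampler works in sigma space), but the proportion holds.

```python
def effective_steps(total_steps: int, denoise: float) -> int:
    """Approximate how many sampling steps run for a given denoise value.

    ComfyUI's KSampler skips the earliest (noisiest) steps when denoise
    is below 1.0, so the source image is only partially re-noised.
    """
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be between 0.0 and 1.0")
    return round(total_steps * denoise)

effective_steps(20, 0.6)  # 12 of 20 steps run
```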

Setting Up Img2Img with Flux in ComfyUI

1. Load Flux Dev in ComfyUI

Open ComfyUI and load Flux Dev as your base model. Make sure you have the VAE loaded as well — Flux requires its own VAE file (ae.safetensors) to encode and decode images correctly.

2. Add a Load Image node

Right-click the canvas → Add Node → Image → Load Image. This is where you drop in your source image. You can also use the Load Image from URL node if you're working with web images.

3. Encode the image with the VAE

Connect your Load Image node to a VAE Encode node, then connect that to your sampler's latent_image input. This converts your source image into the latent space that Flux works in.
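If you export the workflow in ComfyUI's API format, this wiring looks roughly like the fragment below. The node ids and the filenames are placeholders; the class names (LoadImage, VAELoader, VAEEncode) are ComfyUI built-in nodes.

```python
# A fragment of a ComfyUI API-format graph: links are [node_id, output_index].
img2img_wiring = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "source.png"}},
    "2": {"class_type": "VAELoader",
          "inputs": {"vae_name": "ae.safetensors"}},
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["1", 0],   # image output of LoadImage
                     "vae": ["2", 0]}},    # VAE output of VAELoader
    # The KSampler's inputs would then include
    # {"latent_image": ["3", 0], ...} to receive the encoded source.
}
```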

4. Set the denoise strength on the sampler

On your KSampler node, find the denoise parameter. Set it to your starting value — try 0.6 for a balanced style transfer. This is the main control you'll be adjusting between runs.

5. Write your prompt and queue

Describe the style, mood, or transformation you want in the positive prompt. Be specific — Flux follows detailed prompts well. Queue the generation and review the output. Adjust denoise strength up or down based on how much transformation occurred.
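If you prefer to queue runs from a script (handy for sweeping denoise values), ComfyUI exposes an HTTP API, by default on port 8188. A minimal sketch, assuming `graph` is a workflow already exported in API format:

```python
import json
import uuid
import urllib.request

def build_payload(graph: dict) -> bytes:
    """Wrap an API-format workflow graph in the /prompt request body."""
    return json.dumps({"prompt": graph,
                       "client_id": str(uuid.uuid4())}).encode("utf-8")

def queue_prompt(graph: dict, host: str = "127.0.0.1:8188") -> None:
    """POST the workflow to a running ComfyUI instance."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # ComfyUI responds with the queued prompt id
```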

Combining Img2Img with LoRAs

Image-to-image becomes even more powerful when you add LoRAs to the mix. A style LoRA (trained on a specific artist or aesthetic) combined with a moderate denoise strength gives you precise control over exactly what kind of transformation happens — not just "make it look like a painting" but "make it look like this specific illustrator's work."

Similarly, a subject LoRA lets you inject a consistent character or face into variations of a scene while the img2img source provides the compositional starting point. This combination is the foundation of consistent character generation workflows.

Tip: When combining LoRAs with img2img, start with lower LoRA strength (0.6–0.8) than you'd use for text-to-image. The source image is already providing structural guidance — the LoRA just needs to handle the style layer on top.
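If you build graphs programmatically, splicing a LoRA in is a small edit. In this sketch the node ids and LoRA filename are assumptions; LoraLoader and its strength inputs match ComfyUI's built-in node.

```python
def add_lora(graph: dict, model_node: str, clip_node: str,
             lora_name: str, strength: float = 0.7) -> str:
    """Insert a LoraLoader node and return its id.

    Downstream nodes should then read model/clip from the returned id
    instead of the original loader. Assumes clip_node is a checkpoint
    loader whose CLIP output sits at index 1.
    """
    node_id = str(max((int(k) for k in graph), default=0) + 1)
    graph[node_id] = {
        "class_type": "LoraLoader",
        "inputs": {
            "model": [model_node, 0],
            "clip": [clip_node, 1],
            "lora_name": lora_name,
            "strength_model": strength,  # lower than for t2i, per the tip
            "strength_clip": strength,
        },
    }
    return node_id
```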

See It in Action

Watch the full video above for a hands-on walkthrough with real before-and-after examples showing exactly what different denoise values do to the same source image.

Ready to try this yourself?

Our one-click ComfyUI installer comes pre-configured with Flux workflows — img2img included. No setup headaches.

Get the Installer