[Help] ComfyUI + ControlNet workflow — results look blurry / not realistic

Hi everyone,

I’m new to ComfyUI and ControlNet, and I’m trying to figure out why my workflow doesn’t produce good results.
The images I get look blurry / flat, and I can’t reach the level of realism I’ve seen in other examples.

I’ve uploaded my result images, workflow screenshot, and the workflow JSON here:

Could anyone take a look and tell me what I might be doing wrong?

  • Are my node connections correct?
  • Could it be my ControlNet or sampler settings?
  • Or is it more about the model choice?

Any suggestions would be greatly appreciated. Thanks!


You’d probably be better off asking on ComfyUI’s GitHub or something…

Are my node connections correct?

Seems fine.

Could it be my ControlNet or sampler settings?

The sampler seems fine, but the ControlNet strength might be too low. It shouldn’t be too high either, but a strength of around 0.6 is a good place to start.
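If you happen to be queuing the workflow through ComfyUI’s HTTP API rather than the UI, here’s a rough sketch of how you could bump that value. It assumes an API-format export named workflow_api.json and the default local server; the file name is a placeholder for your own export.

    # Rough sketch: load an API-format workflow export, raise the ControlNet
    # strength, and queue it on a local ComfyUI server (default port 8188).
    # "workflow_api.json" is an assumption -- point it at your own export.
    import json
    import urllib.request

    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Set strength = 0.6 on every ControlNet apply node in the graph.
    for node in workflow.values():
        if node.get("class_type") in ("ControlNetApply", "ControlNetApplyAdvanced"):
            node["inputs"]["strength"] = 0.6

    # Queue the modified workflow.
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

In the UI it’s just the strength widget on the Apply ControlNet node, of course.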

Or is it more about the model choice?

Probably so this time. Models based on the SD 1.5 architecture are trained around 512x512 and generally top out at about 768x768, while SDXL models are designed for roughly 1024x1024. It varies quite a bit between individual checkpoints, but in this case the architecture’s capabilities are likely insufficient for your target resolution.
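If you want to stay on SD 1.5, the usual workaround is to sample at the native resolution and then upscale. As a sketch against the same kind of API-format export (file name assumed, values just suggestions):

    # Rough sketch: clamp the empty latent to the architecture's native size
    # (512 for SD 1.5, ~1024 for SDXL) and do the enlargement with an upscale
    # step afterwards instead of sampling directly at the large resolution.
    import json

    NATIVE = 512  # use 1024 if you switch to an SDXL checkpoint

    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    for node in workflow.values():
        if node.get("class_type") == "EmptyLatentImage":
            node["inputs"]["width"] = NATIVE
            node["inputs"]["height"] = NATIVE

    with open("workflow_api_fixed.json", "w", encoding="utf-8") as f:
        json.dump(workflow, f, indent=2)

Then add an upscale step to reach the final size, e.g. a latent upscale followed by a low-denoise second pass, or an image upscale model.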