How can I generate the exact same image twice using AI image generation tools?

Hello everyone,

I am a third-year student studying ICT and business, and I find the technology behind diffusion models very interesting. I have posted the same question on r/StableDiffusion, but I hope to gain more insights here as well.

I’ve been experimenting with various AI image generation tools, but I haven’t been able to generate the exact same image twice from the same prompt. I’m seeking advice on how to achieve consistent image outputs.

Tools I’ve tested:

  • ChatGPT
  • Adobe Firefly
  • Llama 3.2
  • ComfyUI

I’ve tried:

  • Using the same prompt multiple times.
  • Looking for options to set a seed value, but haven’t found a way in these tools.

My goal: to generate the exact same image every time I use the same prompt.

Questions:

  1. Has anyone successfully generated the same image multiple times using these or similar tools?
  2. If so, could you please share how you achieved this consistency?
  3. Are there specific settings or methods to control randomness in these models?
  • I’m particularly interested in techniques applicable to Stable Diffusion models.
  • Any documentation or resources on this topic would be greatly appreciated.

Thanks for your help!


“Looking for options to set a seed value, but haven’t found a way in these tools.”

That’s right; I’ll look for an example.

Edit:
There is a seed setting in the Advanced tab, so please try keeping it fixed at a constant value. It should be possible with ComfyUI…
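For reference, here is the same idea as a minimal sketch with the diffusers library (the checkpoint name, prompt, and settings below are only examples, not from this thread). Passing a torch.Generator with a fixed seed makes Stable Diffusion return the same image for the same prompt and settings.

```python
# Minimal sketch, assuming the diffusers + torch packages and a CUDA GPU;
# the checkpoint name below is just an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"
seed = 1234  # any fixed integer

# A generator seeded with a fixed value makes the initial latent noise
# deterministic, so re-running this script reproduces the same image.
generator = torch.Generator(device="cuda").manual_seed(seed)

image = pipe(
    prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("lighthouse_seed1234.png")
```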

Sorry for the late reply. Your suggestion gave me a clearer direction and helped me make progress, even though adjustments are still needed. Thank you for your time and input.


You can get consistent results by setting the same seed and using the exact same prompt, model, and settings.

One trick I’ve used is saving the seed and the full prompt details alongside each image in a log file, so I can recreate it later if needed. Sometimes even tiny differences in guidance scale or number of steps can change the output.
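If it helps, here is a sketch of that logging idea (the file names and field values are just examples, not from this thread): write every parameter needed for a re-run next to the image, then read it back later. Changing any of these values, even by one step, will generally give a different image.

```python
# Sketch of the logging idea: keep a small JSON record next to each image
# with everything needed to recreate it (all names/values are examples).
import json

params = {
    "model": "runwayml/stable-diffusion-v1-5",
    "prompt": "a watercolor painting of a lighthouse at sunset",
    "negative_prompt": "",
    "seed": 1234,
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
    "width": 512,
    "height": 512,
}

with open("lighthouse_seed1234.json", "w") as f:
    json.dump(params, f, indent=2)

# Later: load the record and feed the exact same values (same model,
# sampler, resolution, seed) back into the pipeline to reproduce the image.
with open("lighthouse_seed1234.json") as f:
    saved = json.load(f)
print(saved["seed"], saved["prompt"])
```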


Hi! Really interesting question — reproducibility in image generation is surprisingly hard even when prompts are fixed. That’s actually one of the reasons I’m building my next AI tool called Blur Blur Blur.

It’s built on top of something I call the TXT OS + WFGY Reasoning Engine, which isn’t just another frontend — it lets you lock down prompt parameters semantically, including ΔS (semantic tension) and λ_observe (subjective vector), to produce highly repeatable results even across sessions.

The system’s goal is exactly what you’re asking: to make diffusion output behave deterministically, not just by fixing seeds, but by understanding the internal ‘semantic structure’ of prompts and how they unfold in vector space.

Still in early release, but you’re welcome to try it:
WFGY/OS/BlurBlurBlur at main · onestardao/WFGY · GitHub

Would love to hear what you think if you give it a spin.