How can I optimize inference of a Stable Diffusion model when the generated images use different seeds but the same prompt?

I have a use case where I need to generate around 100 images for the same prompt, but each image has to be generated from a different random seed. I'm hoping to find a way to reduce the time taken to generate these 100 images.

Current approach:

import torch
from diffusers import StableDiffusionPipeline

num_images = 100
gen_images = []
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
prompt = "a photo of an astronaut riding a horse on mars"
# one image per pipeline call, so the 100 images are generated sequentially
for i in range(num_images):
    image = pipe(prompt).images[0]
    gen_images.append(image)

hey @abhijit1247 you can pass a list of torch generators to the pipeline. You won't be able to fit a single batch of 100 images in memory, but you can generate them in smaller batches with a different generator (and therefore a different seed) for each image.
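For reference, a minimal sketch of that batched approach (the batch_size of 10 and the sequential seeds 0–99 are arbitrary example choices; tune the batch size to whatever fits in your GPU memory):

import torch
from diffusers import StableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()

prompt = "a photo of an astronaut riding a horse on mars"
num_images = 100
batch_size = 10  # example value; depends on your GPU memory

gen_images = []
for batch_start in range(0, num_images, batch_size):
    # one generator per image so each image in the batch gets its own reproducible seed
    generators = [
        torch.Generator(device="cuda").manual_seed(seed)
        for seed in range(batch_start, batch_start + batch_size)
    ]
    images = pipe(
        prompt,
        num_images_per_prompt=batch_size,
        generator=generators,  # the pipeline accepts a list of generators, one per image
    ).images
    gen_images.extend(images)

Since the whole batch goes through the UNet in a single forward pass per denoising step, this is usually much faster than 100 separate calls, while each image still comes from its own seed.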

I wanted to generate 25 images with the same prompt, but some of the images are blurry and some are good. Why is this happening?