I am trying to run inference on a server to generate images from text.
Here is the sample I am using: Run Inference on servers
(I added my personal token to bypass the rate limit.)
Every time I run this sample with the same prompt, the exact same image is generated. I would like a new image to be generated on every run, the way ChatGPT does.
I experimented with guidance_scale, but the image turns into complete garbage.
I have only tried the default model and stabilityai/stable-diffusion-2-1.
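For reference, here is roughly the kind of call I am making, plus my guess at how the output is supposed to vary. This is a minimal sketch using huggingface_hub's InferenceClient rather than the exact sample I linked; the `seed` and `guidance_scale` parameters are my assumption based on the text_to_image documentation:

```python
import random


def fresh_seed() -> int:
    # Diffusion output is deterministic for a fixed (prompt, seed),
    # so generating a new random seed per request is what I expect
    # should change the image between runs.
    return random.randint(0, 2**32 - 1)


def generate(prompt: str, token: str):
    # Sketch of the call; InferenceClient is from huggingface_hub,
    # and I am assuming this mirrors what the hosted sample does.
    from huggingface_hub import InferenceClient

    client = InferenceClient(
        model="stabilityai/stable-diffusion-2-1",
        token=token,  # my personal token
    )
    return client.text_to_image(
        prompt,
        guidance_scale=7.5,        # default-ish value; high values gave me garbage
        seed=fresh_seed(),         # new seed each call
    )
```

If a fresh seed still returns the identical image, I wonder whether the server is caching responses for identical requests, but I have not confirmed that.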