When I give the exact same prompt to the exact same model 8 times, it produces 8 different results. I'm currently testing on runwayml/stable-diffusion-v1-5.
What is happening inside the model that causes an input to produce different outputs?
The prompt is: "a house"
The lower textboxes are ViT captions generated from the images.
Interesting. It's only sporadically reproducible, and sometimes it does return the same value. "a house" and "a dog" were returning different images, but are now returning the same image, while "a fish" still returns varied images.
Check it out:
Now isn't this strange. In this example, the model was sort of stuck on 2 images, being fed from the previous image's short description, but then gained some creativity. What is happening here?
In this example, it was stuck for 10 frames before gaining creativity on the 11th frame.
(The images load left to right from the top, and each image is generated from the prompt below the preceding image to its left.)
If I'm not wrong, each request calls the original Space with a new random seed, which generates different images.
Random seed, alright, that is a solid clue, thanks!
I need to figure out how to control that random seed. Sometimes more randomness is desirable, while other times I want none at all, so the same prompt reproduces the same image.
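The seed controls the initial latent noise the diffusion process starts from: the denoising steps themselves are deterministic, so two runs of the same prompt diverge only because they begin from different random noise. Here's a minimal stdlib sketch of that idea, using Python's `random` module as a stand-in for the pipeline's noise generator (the function name `sample_latent` is just for illustration):

```python
import random

def sample_latent(seed, n=8):
    """Stand-in for the initial latent: n Gaussian samples from a seeded RNG."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> identical starting noise -> (in the real pipeline) identical image.
assert sample_latent(42) == sample_latent(42)

# Different seed -> different starting noise -> a different image for the same prompt.
assert sample_latent(42) != sample_latent(43)
```

If you're calling the model through the diffusers library rather than a Space, the same idea is exposed via the pipeline's `generator` argument, e.g. `pipe(prompt, generator=torch.Generator().manual_seed(42))` should give reproducible output, while omitting `generator` keeps the varied behaviour.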