Stable Diffusion img2img: Continue from a saved image

Using the Stable Diffusion img2img pipeline, I’d like to, e.g., do 50 steps, save the result to a PNG, then do 50 more steps from the saved PNG using the same prompt and seed.
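
Roughly what I’m doing, as a simplified sketch (the model name, prompt, strength and other parameters here are just illustrative, and depending on the diffusers version the image argument may be called init_image):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a watercolor landscape"
    seed = 42

    # First pass: 50 steps from the original input, save to PNG
    generator = torch.Generator("cuda").manual_seed(seed)
    stage1 = pipe(prompt=prompt, image=Image.open("input.png").convert("RGB"),
                  strength=0.75, num_inference_steps=50,
                  generator=generator).images[0]
    stage1.save("stage1.png")

    # Second pass: 50 more steps from the saved PNG, same prompt and seed
    generator = torch.Generator("cuda").manual_seed(seed)
    stage2 = pipe(prompt=prompt, image=Image.open("stage1.png").convert("RGB"),
                  strength=0.75, num_inference_steps=50,
                  generator=generator).images[0]
    stage2.save("stage2.png")

The expectation was that stage2.png would come out like a single 100-step run, but it doesn’t.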

But it doesn’t come out right… I tried taking out the resampling line in preprocess, but the result is the same.

I suspect it’s something else in preprocess, but I’m not entirely sure what it does:

    # Convert the PIL image to a float32 array in [0, 1]
    image = np.array(image).astype(np.float32) / 255.0
    # Add a batch dimension and reorder HWC -> NCHW
    image = image[None].transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    # Rescale to [-1, 1], the range the model expects
    return 2.0 * image - 1.0


Hi @andydhancock, thanks for writing!

In general, I think it’s not safe to assume that 50 + 50 steps is the same as 100 steps, as the scheduler state changes when you launch a new generation. Some people use callbacks to retrieve intermediate images; see, for example, this GitHub issue.
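
As a rough sketch of what that looks like (the model name and parameters are just examples, and the exact callback signature can differ between diffusers versions; this uses the callback and callback_steps arguments of the pipeline call):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    intermediate_latents = []

    def grab_latents(step, timestep, latents):
        # Invoked every callback_steps denoising steps with the current latents;
        # they can be decoded into images afterwards with the pipeline's VAE.
        intermediate_latents.append(latents.detach().clone())

    result = pipe(
        prompt="a watercolor landscape",
        image=Image.open("input.png").convert("RGB"),
        strength=0.75,
        num_inference_steps=100,
        generator=torch.Generator("cuda").manual_seed(42),
        callback=grab_latents,
        callback_steps=10,
    )

Because the intermediate latents come from the same run, they are consistent with the final 100-step image.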

Could you share a bit more about your use case so people can suggest alternatives?


Hi @pcuenq, thanks for the reply!

It’s a Twitter bot I’ve written that gives people images based on their tweet. I’d like them to be able to take one of the images it has output and run a few more steps on it to perfect it a bit.
A workaround would be just to check whether they’ve requested it before and do a fresh 100-step generation, which I’ll probably end up doing, but I was intrigued to know why it wasn’t working as I expected…
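
Something along these lines, reusing the seed via torch.Generator so a repeat request just becomes a longer run from the original input (again, model name and parameters are only illustrative):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def generate(prompt, image_path, seed, steps):
        # Always start from the original input with a fixed seed, so asking
        # for "more steps" is a fresh, longer run rather than a resume.
        generator = torch.Generator("cuda").manual_seed(seed)
        init_image = Image.open(image_path).convert("RGB")
        return pipe(prompt=prompt, image=init_image, strength=0.75,
                    num_inference_steps=steps, generator=generator).images[0]

    first = generate("a watercolor landscape", "input.png", seed=42, steps=50)
    first.save("first.png")

    # If the same user asks to refine it, regenerate with more steps
    refined = generate("a watercolor landscape", "input.png", seed=42, steps=100)
    refined.save("refined.png")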

I guess I’ve come at it with this incorrect assumption, quoted from your link:

> Granted, that last bit is partly because some people start with incorrect assumptions how the inference process works and they expect the 20th step of a 100-step task will give them the same thing they see as the 20th step from a 20 step task…


Yes, I think running a fresh generation would be the way to go in this case 🙂

And this is very good feedback, by the way; it’s not obvious how the process works when you start using the library. Thanks!
