Enabling attention slicing leads to black images when running the pipeline more than once

I get a black image the second time I run a pipeline with enable_attention_slicing() enabled.

I am following these instructions:

And this is the code I am using right now in a Jupyter notebook:

```python
from tqdm import tqdm
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

device = "mps"  # "cuda" on NVIDIA hardware

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True)
pipeline.enable_attention_slicing()
pipeline = pipeline.to(device)
pipeline.safety_checker = None
pipeline.requires_safety_checker = False
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

generator = torch.Generator(device).manual_seed(0)
prompt = "portrait photo of a old warrior chief"
image = pipeline(prompt, generator=generator, num_inference_steps=5).images[0]
```


As soon as I run this a second time, or e.g. when I run batch image generation, I get the NSFW warning. If I deactivate the safety checker, e.g. by setting safety_checker = None, I get only black results.

I have to restart the notebook kernel to make it work again, but then again only the first run works.
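To confirm the second-run output is truly all-black (and not just very dark), I check the pixel values directly. A minimal sketch with a hypothetical `is_black` helper, using a synthetic image to stand in for the pipeline output:

```python
import numpy as np
from PIL import Image

def is_black(image: Image.Image) -> bool:
    """Return True if every pixel in the image is zero."""
    return bool(np.array(image).max() == 0)

# Sanity check with a synthetic image standing in for pipeline output:
blank = Image.new("RGB", (64, 64))  # all pixels default to (0, 0, 0)
print(is_black(blank))  # True
```

Running this on the image from the second pipeline call returns True, so the output really is all zeros.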

When I remove pipeline.enable_attention_slicing() everything works fine.

Is this a bug or is there something wrong with my config?

The seed is fixed and num_inference_steps is set to 5 to ease reproduction; changing them doesn't help.
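To rule out generator state as the cause, I re-create the generator before every run so each run starts from the same seed. A minimal sketch (using "cpu" so it runs anywhere; the mps device above behaves the same for seeding):

```python
import torch

# Re-create the generator before every run so each run starts
# from the same seed ("cpu" stands in for the mps device above).
def fresh_generator(device: str = "cpu", seed: int = 0) -> torch.Generator:
    return torch.Generator(device).manual_seed(seed)

a = torch.randn(4, generator=fresh_generator())
b = torch.randn(4, generator=fresh_generator())
print(torch.equal(a, b))  # True: identical noise, so runs are comparable
```

Even with a freshly seeded generator each run, the second run still comes back black.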