Is it possible to run the text-to-image example on CPU?

Running the model on CPU is too slow; it takes more than 2 hours to generate a single image. Is there a way to generate multiple images at once? I am using the following code with these parameter values:
{
  "prompt": "A capybara holding a sign that reads Hello World",
  "num_inference_steps": 28,
  "guidance_scale": 3.5
}

import torch
from diffusers import StableDiffusion3Pipeline

# Load the pipeline in full precision and move it to CPU
pipe = StableDiffusion3Pipeline.from_pretrained(model_name, cache_dir=cache_dir, torch_dtype=torch.float32)
pipe = pipe.to("cpu")  # Move to CPU

# Run inference with the request parameters shown above
image = pipe(
    request.prompt,
    num_inference_steps=request.num_inference_steps,
    guidance_scale=request.guidance_scale,
).images[0]
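
On the batching question, here is a minimal sketch that reuses the pipe object loaded above and assumes the standard diffusers num_images_per_prompt argument, which makes a single call return a list of images. Note that on CPU the total time still scales roughly with the number of images, so this batches the work rather than speeding it up.

# Sketch: generate several images in one call (reuses `pipe` from above).
# num_images_per_prompt is the standard diffusers batching argument.
images = pipe(
    "A capybara holding a sign that reads Hello World",
    num_inference_steps=28,
    guidance_scale=3.5,
    num_images_per_prompt=4,  # how many images to return for this prompt
).images  # list of PIL images

for i, img in enumerate(images):
    img.save(f"capybara_{i}.png")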
