Loading a text-to-image model in Colab causes CUDA OOM

I just want to try the model demo in Google Colab. After installing the dependencies, I run the demo code below:

import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
    token=token,
    use_safetensors=True,
)
pipe = pipe.to("cuda")

image = pipe(
    "A cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image

I get the error below:

OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 

The error is thrown by pipe = pipe.to("cuda").

When I switch to a text-generation model instead, it works fine.

So is it that Google Colab's free-tier GPU simply doesn't have enough memory to run this Hugging Face image model?
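For reference, here is my rough back-of-envelope estimate of the fp16 weight footprint. The parameter counts are approximate assumptions (common figures quoted for SD3-medium's transformer and its three text encoders, not exact numbers from the model card):

```python
# Rough fp16 VRAM estimate for stabilityai/stable-diffusion-3-medium-diffusers.
# Parameter counts below are approximate assumptions, not exact figures.
params_billions = {
    "mmdit_transformer": 2.0,   # SD3-medium diffusion transformer, ~2B
    "t5_xxl_encoder":    4.7,   # text_encoder_3 (T5-XXL), ~4.7B
    "clip_l":            0.12,  # text_encoder (CLIP ViT-L)
    "clip_g":            0.70,  # text_encoder_2 (OpenCLIP bigG)
    "vae":               0.08,
}

bytes_per_param = 2  # float16 = 2 bytes per weight
total_gb = sum(params_billions.values()) * 1e9 * bytes_per_param / 1024**3

print(f"approx. fp16 weights: {total_gb:.1f} GB")  # → approx. fp16 weights: 14.2 GB
```

If those counts are in the right ballpark, the weights alone are close to the ~15 GB of a free-tier T4, so running out of memory during pipe.to("cuda"), before any inference even starts, seems plausible.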

I searched online and found that some people have run it successfully. Does anyone know how to solve this problem? Thanks!
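In case it helps, here is a sketch of the memory-saving options I plan to try, based on the diffusers documentation for StableDiffusion3Pipeline (same model id and sampling parameters as my snippet above; requires a CUDA GPU and the accelerate package, so treat it as untested on my side):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load without the T5-XXL text encoder (text_encoder_3): in fp16 its ~4.7B
# parameters are roughly 9 GB of weights, at the cost of some prompt fidelity.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,  # skip T5-XXL
    tokenizer_3=None,
    torch_dtype=torch.float16,
    use_safetensors=True,
)

# Instead of pipe.to("cuda"): keep weights in CPU RAM and move each sub-model
# to the GPU only while it is actually running.
pipe.enable_model_cpu_offload()

image = pipe(
    "A cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("cat.png")
```

enable_model_cpu_offload() trades speed for memory, since sub-models are shuttled between CPU and GPU, but it avoids holding the whole pipeline on the card at once.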