Pipe.to("cuda") not working

I'm using the latest NVIDIA Studio drivers.
PyTorch CUDA works on WSL Ubuntu, however I cannot run pipe.to("cuda") with Stable Diffusion.

Inside a JupyterLab cell:

import torch  # needed in this cell for torch.device below
from huggingface_hub import notebook_login

notebook_login()  # ← although I enter my token hf_asfasfd… I cannot verify the login is accepted

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # → reports cuda

In another cell:

import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)  # ← I can see it downloaded the model, so the login was OK I guess
pipe = pipe.to("cuda")  # ← kernel times out; I have a 3080 Ti, GPU memory stays low, no indication it loaded

prompt = "a photo of a cat riding a horse on mars"
image = pipe(prompt).images[0]

image.show()

I altered the config to restart the kernel after 10 minutes, in case it just takes longer, but this has no effect. When the kernel eventually dies I get AsyncIOLoopKernelRestarter: restarting kernel on the Linux prompt, but no other errors on screen or on the notebook's web page.
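Before loading the full pipeline, it may help to confirm that a plain CUDA tensor operation works at all inside the same kernel; if this minimal sketch also hangs, the problem is the CUDA/WSL setup rather than diffusers (it falls back to CPU when CUDA is unavailable, so it is safe to run anywhere):

```python
import torch

# Sanity check: move a tiny tensor to the GPU and do one op.
# If this cell also hangs, the issue is the driver/CUDA stack, not the pipeline.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.ones(3, device=device)
total = x.sum().item()
print(device, total)  # → e.g. "cuda 3.0"
```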

I found there's an updated pipe call for moving the model to the GPU; use this instead:

Original code:
pipe.to("cuda")

Then replace with:
pipe.to(device="cuda", dtype=torch.float16)
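For what it's worth, the device=/dtype= keywords follow the standard torch.nn.Module.to signature, so you can verify the move-and-cast behavior on any small module first, with no diffusers or model download involved (this sketch falls back to CPU when CUDA is unavailable):

```python
import torch

# A throwaway module stands in for the pipeline's submodules.
lin = torch.nn.Linear(4, 4)

device = "cuda" if torch.cuda.is_available() else "cpu"
# Move and cast in a single call, same keywords as the pipe.to(...) suggestion above.
lin = lin.to(device=device, dtype=torch.float16)
print(lin.weight.dtype)  # → torch.float16
```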