Device Type Error With Diffusers Pipeline TAT

Hi, I’m new here and I ran into a problem while working through a Hugging Face-related blog post.

Environment

  • WIN 10 (remote ssh)
  • diffusers version: 0.10.2
  • Platform: Windows-10-10.0.19042-SP0
  • Python version: 3.8.16
  • PyTorch version (GPU?): 1.8.1 (True)
  • Huggingface_hub version: 0.15.1
  • Transformers version: 4.29.2
  • Using GPU in script?: yes (the script calls pipe.to("cuda"))
  • Using distributed or parallel set-up in script?: no

Error Reproduce

First, instead of following the blog exactly, I downloaded the pre-trained model separately, because it is large and difficult to download.

I only downloaded the *.bin (PyTorch) weights for every module of stable-diffusion-v1-4 via Git LFS.

Then I followed the blog and ran the following code:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
image.save(f"astronaut_rides_horse.png")

Error Info

---------------------------------------------------------------------------
RuntimeError      Traceback (most recent call last) 
Cell In[1], line 9
      5 pipe.to("cuda")
      7 prompt = "a photograph of an astronaut riding a horse"
----> 9 image = pipe(prompt).images[0]
     10 image.save(f"astronaut_rides_horse.png")

File h:\miniconda\conda\envs\deepke\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__..decorate_context(*args, **kwargs)
     24 @functools.wraps(func)
     25 def decorate_context(*args, **kwargs):
     26     with self.__class__():
---> 27         return func(*args, **kwargs)

File h:\miniconda\conda\envs\deepke\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py:477, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, output_type, return_dict, callback, callback_steps)
    475 # 2. Define call parameters
    476 batch_size = 1 if isinstance(prompt, str) else len(prompt)
--> 477 device = self._execution_device
    478 # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
    479 # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
    480 # corresponds to doing no classifier free guidance.
    481 do_classifier_free_guidance = guidance_scale > 1.0

File h:\miniconda\conda\envs\deepke\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py:213, in StableDiffusionPipeline._execution_device(self)
    206 @property
...
--> 213     if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
    214         return self.device
    215     for module in self.unet.modules():

RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta

I tried my best to search for related issues but didn’t find anything that matched my situation.

I would appreciate any help.

Same here. Have you found a solution yet?

I ran into the same error as well. It turned out the PyTorch version I was using was 1.8; upgrading to the latest 2.0 resolved the error.
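For anyone hitting this later: the traceback fails inside `torch.device("meta")`, and the RuntimeError from PyTorch 1.8 lists every device type it knows about, with no "meta" among them, so any diffusers release whose `_execution_device` check constructs a meta device will fail on that build. A quick sanity check before running the pipeline is a version guard like the sketch below. Note the minimum version here is an assumption on my part; the only version confirmed to work in this thread is 2.0, and `supports_meta_device` / `min_required` are illustrative names, not diffusers API:

```python
import re

def version_tuple(version: str) -> tuple:
    """Parse a version string like '1.8.1+cu111' into a comparable tuple."""
    core = version.split("+")[0]  # drop local build tags such as +cu111
    return tuple(int(p) for p in re.findall(r"\d+", core)[:3])

def supports_meta_device(torch_version: str, min_required: str = "2.0.0") -> bool:
    """Return True if torch_version is at least min_required (assumed cutoff)."""
    return version_tuple(torch_version) >= version_tuple(min_required)

# Versions from this thread:
print(supports_meta_device("1.8.1+cu111"))  # the failing environment
print(supports_meta_device("2.0.0"))        # the version reported to work
```

In a real script you would pass `torch.__version__` in and print an upgrade hint (e.g. `pip install -U torch`) when the check returns False, instead of letting the pipeline crash mid-call.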