Diffusion Models Course - Unit 3

I am testing the notebook 01_stable_diffusion_introduction.ipynb from the huggingface/diffusion-models-class repository on GitHub. In the section Additional Pipelines / Inpainting, the following code:

image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]

generates this error:

TypeError: __call__() got an unexpected keyword argument 'image'

Any ideas how to fix it?

Hi @royam0820! The image parameter was recently renamed, so you might need to upgrade to a newer version of diffusers. Could you please try running pip install --upgrade diffusers?

Hi, first of all, Happy New Year to you and my best wishes to you.
I did try your suggestion to upgrade to a newer version, but the same error pops up:

image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]

still raises TypeError: __call__() got an unexpected keyword argument 'image'.

Thanks a lot, and Happy New Year to you too :slight_smile:

That’s strange. Did you try to uninstall diffusers first? If you didn’t, you can use pip uninstall -y diffusers and then issue the install command again, just to verify there’s nothing weird going on.

If that doesn’t work, you can also run diffusers-cli env and post the output here so I can try to replicate in a similar environment. Are you running the notebook on Colab or somewhere else?
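For example, in a Colab cell (the leading ! runs a shell command; this just restates the commands above):

!pip uninstall -y diffusers
!pip install --upgrade diffusers
!diffusers-cli env

After reinstalling, you may need to restart the Colab runtime so the new version is actually picked up by the notebook.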

I hope we can sort this out :slight_smile:

Hi! I did try to uninstall and reinstall + upgrade and I have the same error.

Here is the environment info (I am on Colab):

  • diffusers version: 0.11.1
  • Platform: Linux-5.10.133+-x86_64-with-glibc2.27
  • Python version: 3.8.16
  • PyTorch version (GPU?): 1.13.0+cu116 (True)
  • Huggingface_hub version: 0.11.1
  • Transformers version: 4.26.0.dev0
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Oh, I see the problem now! You are importing StableDiffusionInpaintPipeline but then you are instantiating your pipeline using StableDiffusionPipeline, which doesn’t know how to deal with input images. You need to create your in-painting pipeline like this:

pipe = StableDiffusionInpaintPipeline.from_pretrained(model_id).to(device)
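For context, here is a minimal, self-contained sketch of the inpainting setup; the model_id, file names, and prompt below are placeholders, so use the ones from the notebook:

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-inpainting"  # placeholder checkpoint; the notebook may use a different one

# Placeholder inputs: the mask is white where the image should be repainted
init_image = Image.open("init.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(model_id).to(device)
prompt = "a photograph of a cat sitting on a bench"  # placeholder prompt
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("inpainted.png")

The key point is that the pipeline must be created from StableDiffusionInpaintPipeline, which accepts the image and mask_image arguments, rather than StableDiffusionPipeline, which only takes a text prompt.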

Let me know if that works :slight_smile:

Good catch! I have been testing multiple features and this issue stemmed from some careless copying and pasting between cells. Thank you for pointing out this stupid mistake on my part, and many thanks for your prompt help and support!


No stupid mistake at all! It’s great that you are exploring everything, don’t hesitate to keep asking questions :slight_smile:

Thank you! :smiley: