Hey guys, I’m trying to use the diffusers library with andite/pastel-mix
=> https://huggingface.co/andite/pastel-mix
How do you set up the VAE here?
Hi @AliceM! I’m not sure I fully understand your question; looking at the model repo, it appears to be a valid diffusers checkpoint. To use it, you’d run code similar to what appears in the model card:
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/pastel-mix"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
Is that not working for you?
I’m only getting dull colors with that snippet…
After experimenting a lot, it seems like it only works when I swap in another VAE:
from diffusers import AutoencoderKL

# Load the VAE in the same dtype as the pipeline, then replace it
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")
pipe.vae = vae
This feels very dirty though, so I wonder why it doesn’t work out of the box…
Oh, I see. It would appear that the model was trained with a different VAE, but the model card was not updated to reflect that. I also see a few different VAE checkpoints in the root of the repo, and I’m not sure how they are supposed to be used.
I’d recommend you open a discussion in the repo itself asking for clarification. I’d ask the author whether it’s possible to update the model card with instructions, and ideally to update the vae folder of the repo so that it contains the recommended VAE instead of the default one. To open a discussion, use the Community tab in the repo.
Hope that helps