When I load the pipeline normally:
pipe = StableDiffusionXLPipeline.from_pretrained("./stable-diffusion", torch_dtype=torch.float16, variant="fp16")
my results are as expected: I give a prompt and I get an image that matches it.
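For context, this is the whole working flow; the device and the test prompt are just my setup, nothing special:

import torch
from diffusers import StableDiffusionXLPipeline

# One-call load: from_pretrained assembles every sub-model from the checkpoint
pipe = StableDiffusionXLPipeline.from_pretrained(
    "./stable-diffusion", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# Any prompt produces a sensible image this way
image = pipe("an astronaut riding a horse").images[0]
image.save("normal_load.png")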
However, when I initialize all the components individually first, for example:
vae = AutoencoderKL.from_pretrained("./stable-diffusion/vae", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
and then build the pipeline from them: pipe = StableDiffusionXLPipeline(vae, ...)
I get distorted images that bear no relation to the prompt.
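For reference, here is roughly the full manual version I'm attempting. The subfolder paths mirror the layout of my local ./stable-diffusion checkout, and EulerDiscreteScheduler is just what my model_index.json lists, so treat those details as specific to my setup; I've also written the constructor call with keyword arguments here purely for readability:

import torch
from diffusers import AutoencoderKL, EulerDiscreteScheduler, StableDiffusionXLPipeline, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer

# Each component loaded from its own subfolder of the local checkpoint
vae = AutoencoderKL.from_pretrained("./stable-diffusion/vae", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
text_encoder = CLIPTextModel.from_pretrained("./stable-diffusion/text_encoder", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
text_encoder_2 = CLIPTextModelWithProjection.from_pretrained("./stable-diffusion/text_encoder_2", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
tokenizer = CLIPTokenizer.from_pretrained("./stable-diffusion/tokenizer")
tokenizer_2 = CLIPTokenizer.from_pretrained("./stable-diffusion/tokenizer_2")
unet = UNet2DConditionModel.from_pretrained("./stable-diffusion/unet", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
scheduler = EulerDiscreteScheduler.from_pretrained("./stable-diffusion/scheduler")

# Build the pipeline from the individual components
pipe = StableDiffusionXLPipeline(
    vae=vae,
    text_encoder=text_encoder,
    text_encoder_2=text_encoder_2,
    tokenizer=tokenizer,
    tokenizer_2=tokenizer_2,
    unet=unet,
    scheduler=scheduler,
)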
I could just keep loading everything the normal way, but I want to understand how all of this fits together, and I don't see what's wrong. Am I failing to initialize something somewhere? Does the pipeline class expect something I'm not passing?