Pipeline error in Colab

For the code below:

Step 5: Load the base model and LoRA weights

```python
import os

import torch
from diffusers import DiffusionPipeline

print("Loading base FLUX.1-dev model...")

# Load the base model
pipe = DiffusionPipeline.from_pretrained(  # This is the line where I'm getting an error
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16"
)

# Move the model to GPU for faster processing
pipe = pipe.to("cuda")

# Load the LoRA weights
print("Loading and applying LoRA weights from Amit71965/ai-amit...")
pipe.load_lora_weights("Amit71965/ai-amit")

# Optional: Test the model with a sample prompt
test_prompt = "Amit in a suit, professional portrait, high quality, detailed"
print(f"Generating a test image with prompt: '{test_prompt}'")
test_image = pipe(test_prompt).images[0]
test_image.save(os.path.join(base_dir, "test_image.png"))
print(f"Test image saved to {os.path.join(base_dir, 'test_image.png')}")
```

After running the Step 5 cell, I get an error, which is attached as a screenshot.

How do I resolve this issue? I'm trying to load this model, perform quantisation on it, and then convert it to Core ML to embed in an iOS application.


Try this.

```python
# Load the base model
pipe = DiffusionPipeline.from_pretrained(  # This is the line where I'm getting an error
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.float16,
    use_safetensors=True,
    # variant="fp16"  # <= Not required in this case. Use this when you want to load a file that is named *.fp16.safetensors
)
```