For the code below (Step 5: load the base model and LoRA weights):

```python
import os  # needed for os.path.join below

from diffusers import DiffusionPipeline
import torch

print("Loading base FLUX.1-dev model...")

# Load the base model
pipe = DiffusionPipeline.from_pretrained(  # This is the line where I'm getting an error
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)

# Move the model to GPU for faster processing
pipe = pipe.to("cuda")

# Load the LoRA weights
print("Loading and applying LoRA weights from Amit71965/ai-amit...")
pipe.load_lora_weights("Amit71965/ai-amit")

# Optional: test the model with a sample prompt
# (base_dir is defined in an earlier cell)
test_prompt = "Amit in a suit, professional portrait, high quality, detailed"
print(f'Generating a test image with prompt: "{test_prompt}"')
test_image = pipe(test_prompt).images[0]
test_image.save(os.path.join(base_dir, "test_image.png"))
print(f"Test image saved to {os.path.join(base_dir, 'test_image.png')}")
```
After running this Step 5 cell, I get an error, which is also attached as a screenshot. How do I resolve this issue? I'm trying to load this model, then perform quantisation, and then convert it to Core ML to embed in an iOS application.