Flux.1-dev installation

I see the same problem every time I try to run this code. Do you have any ideas? It looks like the process stops midway.

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "a tiny astronaut hatching from an egg on the moon"
out = pipe(
    prompt=prompt,
    guidance_scale=3.5,
    height=768,
    width=1360,
    num_inference_steps=50,
).images[0]
out.save("image.png")

(venv) PS C:\Users\alf\Desktop> python generate_image.py
Loading pipeline components…: 43%|██████████████████████▎ | 3/7 [00:00<00:00, 27.43it/s]You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 14.23it/s]
Loading pipeline components…: 86%|████████████████████████████████████████████▌ | 6/7 [00:00<00:00, 8.61it/s]
(venv) PS C:\Users\alf\Desktop>

Loading Flux into RAM or VRAM in bfloat16 requires a significant amount of memory; do you have enough?
I ran into what was probably a lack of RAM myself: the process froze midway, with no error message, it just stopped, exactly like yours.
We could save some memory by loading the model in float8, but that path in the library currently seems buggy and doesn't appear to work.
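As a rough sanity check on whether your machine can hold the weights at all, here is a back-of-the-envelope estimate of the bfloat16 footprint. The per-component parameter counts below are approximations I'm assuming (the transformer is about 12B parameters, the T5-XXL text encoder about 4.7B), not exact figures from the repo:

```python
# Approximate parameter counts for Flux.1-dev components (assumptions).
components = {
    "transformer": 12e9,
    "text_encoder_2 (T5-XXL)": 4.7e9,
    "text_encoder (CLIP)": 0.1e9,
    "vae": 0.1e9,
}

BYTES_PER_PARAM_BF16 = 2  # bfloat16 = 16 bits = 2 bytes per parameter

def estimate_gb(params: float, bytes_per_param: int) -> float:
    """Weight size in decimal gigabytes, ignoring activations and overhead."""
    return params * bytes_per_param / 1e9

total = 0.0
for name, params in components.items():
    gb = estimate_gb(params, BYTES_PER_PARAM_BF16)
    total += gb
    print(f"{name}: ~{gb:.1f} GB")
print(f"total: ~{total:.1f} GB")  # roughly 34 GB for the weights alone
```

That is the weights alone; actual peak usage during loading and inference is higher, so a machine with 32 GB of RAM and no swap headroom could plausibly freeze exactly the way described.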

As a solution, if the cause really is insufficient RAM or VRAM, it may be enough to load each component separately and quantize it.
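A minimal sketch of that approach, following the diffusers bitsandbytes quantization pattern: quantize only the transformer (it dominates the footprint) to 4-bit NF4 and pass it into the pipeline. This assumes diffusers, transformers, accelerate, and bitsandbytes are installed and recent enough to export `BitsAndBytesConfig`; treat it as a starting point rather than a guaranteed fix:

```python
def load_flux_4bit(repo: str = "black-forest-labs/FLUX.1-dev"):
    """Build a FluxPipeline with the transformer quantized to 4-bit NF4.

    Imports are done lazily so this file can be inspected without
    diffusers/bitsandbytes installed.
    """
    import torch
    from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

    # NF4 with bfloat16 compute: the usual bitsandbytes 4-bit settings.
    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    # Load just the transformer from its subfolder, quantized.
    transformer = FluxTransformer2DModel.from_pretrained(
        repo,
        subfolder="transformer",
        quantization_config=quant,
        torch_dtype=torch.bfloat16,
    )
    # Assemble the full pipeline, reusing the quantized transformer.
    pipe = FluxPipeline.from_pretrained(
        repo,
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # keep components in RAM until needed
    return pipe
```

If RAM is still tight, the T5 text encoder (`text_encoder_2`) can be quantized the same way using transformers' own `BitsAndBytesConfig` and passed in alongside the transformer.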

Details

About the Bug