I want to use the FLUX model to do accurate reference-image transfer, but the 2080 Ti I am using is a bottleneck and cannot run FLUX's dedicated IPAdapter. Is there a better way?

In the interior design industry, people constantly look online at reference images of other projects. I want "what you see is what you get": instead of going through a designer, anyone could upload a photo of their own space plus a reference image and have the result generated for them. I have experimented with this workflow for a long time using photos of my own scenes. Relying only on ControlNet and image-to-image inference does not seem to work well, and elements in the output end up breaking physical plausibility. Perhaps IPAdapter could solve this? But the IPAdapter for FLUX does not seem mature yet, even though FLUX's image quality fascinates me. I hope to achieve this goal through FLUX and launch this feature so that ordinary people can experience the convenience of AI.

That's better than my GPU, but I think it is still going to be a bit of a stretch for FLUX…
If you run FLUX normally it loads in 16-bit, but NF4-quantized it loads in 4-bit, which means you only need about a quarter of the VRAM. Why don't you give that a try?

Currently, HF Diffusers is still working on NF4 support; it can read and write NF4 checkpoints now, but in memory it only gets down to 8-bit, so I recommend using WebUI Forge or ComfyUI.
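For reference, here is a minimal sketch of what 4-bit NF4 loading would look like on the Diffusers side once that support fully lands (the model ID, prompt, and step count are just placeholder choices, and older Turing cards like the 2080 Ti may need float16 instead of bfloat16):

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, BitsAndBytesConfig

# Quantize only the big DiT transformer to 4-bit NF4; text encoders and VAE stay in 16-bit.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # Turing cards like the 2080 Ti may need torch.float16
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # shuttle idle components to system RAM

image = pipe("a bright Scandinavian living room", num_inference_steps=28).images[0]
image.save("room.png")
```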

I am using the GGUF FLUX model, and my platform is also ComfyUI, but I cannot make progress when using ControlNet and the IPAdapter at the same time because my graphics card runs out of memory.

I have another question. I connected the SDXL IPAdapter node into a workflow with FLUX as the main model. Surprisingly, the workflow did not report an error, but the IPAdapter had no effect and was simply ignored. I know there are barriers between different model families, but since the workflow does not error out, is there any chance the SDXL IPAdapter could be combined with the FLUX model? I'm a designer and don't know much about code. I don't know what your occupation is; maybe you'd be interested in solving these problems?

Unfortunately, I am not at all familiar with IP adapters and control nets, having used them only briefly.
In fact, I have only been working with AI for about six months, so everything is still in the exploration stage. I have not yet reached the stage where I can connect it to actual use cases.

I am interested in the topic of Flux because I often run out of RAM and VRAM in my coding hobby.
I might at least be able to help you save memory, offer some coding tips, and point you to people around HF who might know more…

When using a ControlNet with Flux, even HF's Zero GPU Spaces, which are supposed to give you a 40GB-VRAM A100, are only just barely enough. Memory consumption suddenly jumps up. You can reduce the size of the output image, but the model is too large in the first place…
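For what it's worth, here is a rough sketch of the memory-saving knobs on the Diffusers side when pairing a ControlNet with Flux (the ControlNet repo ID, prompt, and input image are just examples, and I have not verified this exact combination on an 11GB card):

```python
import torch
from diffusers import FluxControlNetPipeline, FluxControlNetModel
from diffusers.utils import load_image

# Example checkpoint; swap in whichever Flux ControlNet you actually use.
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # slowest option, but lowest peak VRAM

control_image = load_image("reference_canny.png")  # hypothetical local control image
image = pipe(
    "a minimalist bedroom, soft morning light",
    control_image=control_image,
    controlnet_conditioning_scale=0.6,
    height=768,              # smaller output keeps activation memory down
    width=768,
    num_inference_steps=28,
).images[0]
image.save("bedroom.png")
```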

I did a little searching, and it seems the IP adapter for Flux is even more VRAM-intensive than the ControlNet.
And it doesn't look like it can be combined with GGUF or NF4.
The workaround seems to be to give up on dev, offload some of the model to RAM, and let schnell generate in 4 steps, which seems to be the limit for our class of GPUs. There may be a way to use dev's 8-step Hyper FLUX instead of schnell.
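As a sketch of that "offload to RAM and let schnell finish in 4 steps" idea in Diffusers (prompt and resolution are placeholders):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # park layers in system RAM when not in use

image = pipe(
    "a Japandi dining room with rattan chairs",
    num_inference_steps=4,  # schnell is distilled for ~4 steps
    guidance_scale=0.0,     # schnell does not use classifier-free guidance
    height=768,
    width=768,
).images[0]
image.save("dining_room.png")
```

enable_sequential_cpu_offload is much slower than enable_model_cpu_offload, but it keeps peak VRAM lowest, which is probably the right trade-off on 11GB.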

Anyway, I wouldn’t call renting a paid GPU from Google or HF or buying an expensive GPU a solution…
You will want to make do with what you have.

madebyollin/taef1 · Hugging Face (tiny VAE that makes previews visible from step 1)
ByteDance/Hyper-SD · Hugging Face (8-step generation with dev)
FLUX Realtime - a Hugging Face Space by KingNish (further speeds up schnell)
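A quick sketch of how the first two links could be combined with dev in Diffusers (the LoRA filename and fuse scale are taken from the Hyper-SD model card, so double-check them there; everything else is just an example):

```python
import torch
from diffusers import FluxPipeline, AutoencoderTiny

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Hyper-FLUX LoRA from ByteDance/Hyper-SD lets dev finish in ~8 steps.
pipe.load_lora_weights(
    "ByteDance/Hyper-SD", weight_name="Hyper-FLUX.1-dev-8steps-lora.safetensors"
)
pipe.fuse_lora(lora_scale=0.125)  # scale suggested on the Hyper-SD model card

# Swap in the tiny taef1 VAE for cheap, fast decoding/previews.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taef1", torch_dtype=torch.bfloat16)

pipe.enable_model_cpu_offload()

image = pipe(
    "a mid-century modern study with walnut shelving",
    num_inference_steps=8,
    guidance_scale=3.5,
).images[0]
image.save("study.png")
```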

Thanks for your help, I will try some things.

Sorry I can’t help much.
Since HF is a developers' den, there may be a workaround if you approach them directly, as you might expect. You can use the "mention" (@ + username) and "Discussion" (select a repo and choose "New Discussion") functions.
lllyasviel (Lvmin Zhang) (someone very familiar with advanced uses of image generation; author of WebUI Forge)
XLabs-AI/flux-ip-adapter · Hugging Face (One of the developers of the IP adapter for Flux)

And since only a very small minority of people look at the forum, basically no one will notice your issue unless you contact them directly.

FLUX's graphics card requirements are a bit high, so I can only run FLUX online through MimicPC. Its handling of details is really better than SD3, and MimicPC handles details quite well too, which made for a surprisingly good image-generation experience. But I'm not yet very proficient at some of the fine-tuning, so the road to learning FLUX is going to be a long one.