Trying to run multiple LoRAs on FLUX.1-dev

I see cool demos like the LoRA Explorer, but it only applies one LoRA. I'd really like to use 2 or 3.
Is there any way to do this with the current models on Hugging Face?

You can use up to 3 or 4 LoRAs at a time in my Space. It's heavy, though, and the output is hard to stabilize when several are applied at once.
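For reference, stacking LoRAs in your own script can be sketched with diffusers' multi-adapter API. This assumes a recent diffusers with PEFT support; the LoRA repo names and weights are placeholders, and `blend_weights` is a hypothetical helper for the stabilization problem mentioned above, not part of diffusers:

```python
def blend_weights(weights, budget=1.0):
    """Hypothetical helper: scale per-adapter weights down so their sum
    stays within a budget, since heavy stacks destabilize the output."""
    total = sum(weights)
    if total <= budget:
        return list(weights)
    return [w * budget / total for w in weights]

def run_stacked_loras():
    # Heavy imports kept inside the function: this part needs a GPU.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")

    # Placeholder repo ids -- substitute your own LoRAs.
    pipe.load_lora_weights("user/flux-style-lora", adapter_name="style")
    pipe.load_lora_weights("user/flux-detail-lora", adapter_name="detail")

    # Activate both adapters at once with per-adapter weights.
    pipe.set_adapters(["style", "detail"],
                      adapter_weights=blend_weights([0.9, 0.7]))

    return pipe("a portrait photo, softly lit",
                num_inference_steps=28).images[0]
```

Each `load_lora_weights` call registers one adapter under its own name, and `set_adapters` decides which are active and at what strength, so two or three can run together.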

YUS! I knew you’d be the dude to know. Thanks for the tip.


Loving the mod you made. I want to reference two of my private LoRAs. I see in the advanced settings where I can load multiple. For the file name, would I just point to the exact folder of the safetensors file in my duplicated private Space? Or point to my model repo?

For the file name, would I just point to the exact folder of the safetensors file in my duplicated private Space?

If a repo name is entered in the filename field, it will automatically search for the repo and bring up a list of files, so you can choose from there.

However, I haven’t made it support reading from private repos yet. I could, but I would need your token in some way.
If you duplicate the whole Space and set the environment variable, the modification is easy; if not, there's no clean way to do this.
I've added a login function, but that's for extending the quota, and it would be kind of scary to make it require a token.
Create a token input field…?
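In a duplicated Space, the usual pattern is to read the token from an environment variable rather than a token input field. A sketch under that assumption (`HF_TOKEN` set as a Space secret; the repo id and filename you pass in would be your own):

```python
import os

def download_private_lora(repo_id, filename):
    """Sketch: fetch a LoRA file from a private repo using a token
    stored as a Space secret."""
    token = os.environ.get("HF_TOKEN")  # set as a secret in the duplicated Space
    if token is None:
        raise RuntimeError("HF_TOKEN is not set; private repos need a token")
    # Deferred import: only needed when a download actually happens.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=repo_id, filename=filename, token=token)
```

Keeping the token in a Space secret avoids ever typing it into the UI, which sidesteps the "scary" token-field problem.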

Thanks, I'll duplicate it to a private Space and try your advice.

If you set HF_TOKEN now, it's only used to load the LLM for the Pro features, so the code has to be modified separately to reference private repos. Once you have the duplicated Space open, I'll commit the change as I see fit.
If you can do it yourself, I think you can do a lot better that way.

I've tried FLUX on mimicPC by following its LoRA training tutorial; once I got the hang of it, running multiple LoRAs there worked surprisingly well.