Late to the party, how do I handle a (NSFW) image generation model I uploaded to Vertex AI?

Just one more thing I would like to know before I mark this as solved:

I now also have a FLUX script with a GUI like the one above working, but a few things are still unclear to me, since FLUX uses a different format for predict requests. With the models I tried there seems to be a 77-token limit in place, like for SDXL, but from what I have read this should not be the case. I managed to get prompt_2 working with SDXL, and I noticed that the FLUX models I tried also have a second text encoder, but I don't know how to set this up for FLUX in my script.
I am also not sure which parameters I can send this way to influence image generation on FLUX, so for now I only have resolution, inference steps, and guidance scale.
And last but not least, I would love to know whether there is a way to add LoRAs to preconfigured models on Vertex AI, since I don't think I am able to upload or modify files.

By the way, I will post the updated SDXL script+GUI and the finished FLUX script+GUI once everything is working.


> With the models I tried there is a 77 token limit in place like for SDXL but from what I have read this should not be the case. I managed to get prompt_2 to work with SDXL and I noticed that the FLUX models I tried also have a second text encoder but I don't know how to set this up for FLUX in my script.

This is a warning that often confuses people. :sweat_smile: It only reflects the 77-token limitation of CLIP, the first text encoder of FLUX; the prompt is still passed in full to T5, the second text encoder, beyond 77 tokens. No additional settings are required.
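As a rough sketch: since only the CLIP branch truncates, a long prompt can be sent to the endpoint as-is. The key names below (`prompt`, `num_inference_steps`, `max_sequence_length`, and so on) are assumptions modeled on diffusers' FluxPipeline parameters; the exact predict-request schema depends on the serving container's handler, so adjust them to match yours.

```python
# Hypothetical Vertex AI predict payload for a FLUX endpoint.
# Parameter names are assumptions -- they must match whatever
# your serving container's handler actually expects.
long_prompt = " ".join(
    ["a highly detailed cinematic photo of a vintage car at dusk"] * 10
)  # well over 77 tokens; only CLIP truncates, T5 sees the full text

payload = {
    "instances": [
        {
            "prompt": long_prompt,
            "prompt_2": None,            # optional; typically defaults to reusing `prompt`
            "num_inference_steps": 28,
            "guidance_scale": 3.5,
            "width": 1024,
            "height": 1024,
            "max_sequence_length": 512,  # T5 limit in diffusers' FluxPipeline
        }
    ]
}
```

If the handler rejects unknown keys, start with just `prompt`, steps, and guidance scale, then add the rest one at a time.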

LoRA

  1. Upload a LoRA file to the HF model repo.
  2. Upload a README.md that includes the following header to the same repo.
  3. From then on, the repo will function via an Endpoint as a new model with the LoRA applied.

Basically, this is all you need to do. However, please note that the LoRA strength is fixed at 1.0 with this method.
Incidentally, SD1.5, SDXL, and SD3.5 LoRAs, as well as adapters for LLMs, basically work the same way.

```yaml
---
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: car rollsroyce
---
```

Please set the so-called trigger word in instance_prompt. It will work even if you don't, but doing so saves you time and mistakes.
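The README.md header from step 2 can also be generated programmatically before uploading; a minimal sketch (the base model and trigger word here just mirror the example above, so swap in your own values):

```python
# Build the README.md front matter for the LoRA repo (step 2 above).
# base_model must match the model the LoRA was trained on;
# instance_prompt is the trigger word.
front_matter = "\n".join([
    "---",
    "base_model: black-forest-labs/FLUX.1-dev",
    "instance_prompt: car rollsroyce",
    "---",
])

with open("README.md", "w") as f:
    f.write(front_matter + "\n")
```

After that, upload README.md together with the LoRA file to the repo, and point your Endpoint at that repo.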
