How does LoRA training work with Hugging Face?

I’ve been playing around with replicate.com as an easy API solution for diffusion models, and it works great for training on images (i.e., LoRA). I was wondering how we can do the same with Hugging Face?

With Replicate, you essentially get a new version of the model; you can then run prompts containing the subject’s token and it’ll produce what you’ve trained.
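For context, here’s a minimal sketch of that Replicate flow, assuming the `replicate` Python client, a hypothetical trained model version, and a Dreambooth-style subject token (`sks` here is just a common placeholder):

```python
import replicate  # reads REPLICATE_API_TOKEN from the environment

# Hypothetical owner/model and version hash; Replicate gives you a new
# version ID once training finishes.
output = replicate.run(
    "your-username/your-trained-model:abc123",
    input={"prompt": "a photo of sks person on a beach"},
)
print(output)  # typically a list of generated image URLs
```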

How does Hugging Face’s Inference API work? I’m a bit new to the Hugging Face ecosystem and hoping someone can point me in the right direction.

Thanks!

Hi there!

For training within the HF ecosystem, you can get started with our Colab notebook (Google Colab) or duplicate this Space (Dreambooth - a Hugging Face Space by autotrain-projects) and attach a GPU to it. Once you upload an HF-trained LoRA to a model repo, the inference widget will work out of the box, and you can also use it with the Inference API; see the Inference API docs for details.
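As a minimal sketch of the Inference API call, assuming a hypothetical LoRA repo name and your own HF token (text-to-image models return raw image bytes):

```python
import requests

# Hypothetical repo ID and token; replace with your LoRA repo and HF token.
API_URL = "https://api-inference.huggingface.co/models/your-username/your-lora-repo"
headers = {"Authorization": "Bearer hf_xxx"}

# The Inference API takes a JSON payload with an "inputs" prompt.
response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "a photo of sks person riding a horse"},
)
response.raise_for_status()

# For text-to-image models the response body is the image itself.
with open("output.png", "wb") as f:
    f.write(response.content)
```

The inference widget on the model page is essentially a UI over this same endpoint, which is why an HF-trained LoRA works in both places once it’s in a model repo.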

You can also bring your Replicate LoRA to Hugging Face using this Google Colab: Google Colab,
but converted Replicate LoRAs don’t yet support the inference widget (you can still run them locally, as sketched below).
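
Since the widget isn’t available for those, a minimal local sketch with `diffusers`, assuming a hypothetical converted-LoRA repo and SD 1.5 as an example base model:

```python
import torch
from diffusers import StableDiffusionPipeline

# Example base model; use whichever base your LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights accepts a Hub repo ID or local path with the LoRA weights.
# "your-username/your-converted-lora" is a hypothetical placeholder.
pipe.load_lora_weights("your-username/your-converted-lora")

image = pipe("a photo of sks person in a spacesuit").images[0]
image.save("output.png")
```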
