How do LoRA and training work with Hugging Face?

Hi there!

For training within the HF ecosystem, you can get started with our Colab notebook (Google Colab) or duplicate this Space: Dreambooth - a Hugging Face Space by autotrain-projects, and attach a GPU to it. Once you upload an HF-trained LoRA to a model repo, the inference widget works out of the box, and you can also use it with the Inference API; see the Inference API docs for details.
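As a reference, here is a minimal sketch of calling an uploaded LoRA through the Inference API using the `huggingface_hub` client; the repo ID and prompt are placeholders you would swap for your own.

```python
from huggingface_hub import InferenceClient

# Placeholder repo ID: replace with the model repo your LoRA was uploaded to.
client = InferenceClient(model="your-username/your-dreambooth-lora")

# Text-to-image request against the Inference API; returns a PIL.Image.
image = client.text_to_image("a photo of sks dog in a bucket")
image.save("lora_sample.png")
```

You can equally load the same LoRA locally with diffusers via `pipe.load_lora_weights("your-username/your-dreambooth-lora")` on top of its base model if you prefer to run inference yourself.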

You can also bring your Replicate LoRA to Hugging Face using this Google Colab: Google Colab, but those aren't yet supported by the inference widget.
