I’ve been playing around with replicate.com, which makes it easy to put an API in front of diffusion models, and it works great when training on custom images (i.e. LoRA fine-tuning). I was wondering how we can do the same with Hugging Face?
With Replicate, you essentially get a new version of the model, and then you can run prompts that include your subject’s token and it’ll produce what you’ve trained.
How does Hugging Face’s Inference API work? I’m a bit new to the Hugging Face ecosystem, and I’m hoping someone can point me in the right direction.
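For context on what I’ve gathered so far: the hosted Inference API exposes each model repo on the Hub at a predictable URL, so in principle a fine-tuned model you push to the Hub could be queried the same way. Here’s a rough sketch of what I mean (the model ID is a placeholder for a hypothetical fine-tuned repo, and I’m assuming the standard `{"inputs": prompt}` payload; happy to be corrected):

```python
import json
import urllib.request

# Placeholder/assumption: your own fine-tuned model repo on the Hub.
MODEL_ID = "your-username/your-dreambooth-model"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"

def generate_image(prompt: str, hf_token: str) -> bytes:
    """POST a prompt to the Inference API; the response body is image bytes."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"inputs": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {hf_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage (needs a valid Hub access token, so commented out here):
# png = generate_image("a photo of sks person on a beach", "hf_...")
# open("out.png", "wb").write(png)
```

Is this roughly how it’s meant to be used with a custom-trained model, or is there a different recommended path?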
Thanks!