Accessing a model through an Inference Endpoint

Hi,

I want to host a custom DreamBooth diffusion model and get its output in my application through an API by sending a prompt, instead of running the model locally. Will I be able to do this with Inference Endpoints on Hugging Face? If so, can someone please guide me on where to start?

Yes, that’s totally possible!

For custom models, it’s recommended to follow this guide: Create custom Inference Handler. In short, it lets you create an endpoint for any custom model by adding a `handler.py` to the model repository; a rough sketch is below.
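
Here is a minimal sketch of what such a `handler.py` could look like, assuming your DreamBooth checkpoint is a Stable-Diffusion-style repo loadable with `diffusers` and that you want to return the image as a base64 string in the JSON response (the exact pipeline class and response shape are assumptions, adjust them to your model):

```python
# handler.py – minimal sketch for a DreamBooth diffusion endpoint (assumptions noted above)
import base64
from io import BytesIO
from typing import Any, Dict

import torch
from diffusers import StableDiffusionPipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the model repository checked out on the endpoint.
        self.pipe = StableDiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16)
        self.pipe = self.pipe.to("cuda")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
        # The request JSON arrives in `data`; the prompt is expected under "inputs".
        prompt = data.get("inputs", "")
        image = self.pipe(prompt, num_inference_steps=30).images[0]

        # Encode the PIL image as base64 PNG so it can be returned as JSON.
        buffer = BytesIO()
        image.save(buffer, format="PNG")
        return {"image": base64.b64encode(buffer.getvalue()).decode("utf-8")}
```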

So if I’m understanding correctly, the computational power of the machine running the application won’t matter, since it only needs to fetch the image output from the endpoint? I just want to get the generated image into a simple Flask application without downloading the model. Are there any restrictions, and would I need a paid plan just for a personal project?
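
For reference, the client side can stay very lightweight. Below is a minimal sketch of a Flask app that only sends the prompt to the endpoint and displays the returned image; it assumes the handler above (base64 PNG under an `"image"` key), and the endpoint URL and token are placeholders you would replace with your own:

```python
# app.py – minimal Flask client sketch; ENDPOINT_URL and HF_TOKEN are placeholders
import base64
from io import BytesIO

import requests
from flask import Flask, request, send_file

ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # placeholder – token with access to the endpoint

app = Flask(__name__)


@app.route("/generate")
def generate():
    prompt = request.args.get("prompt", "a photo of sks dog")
    # All heavy lifting happens on the endpoint; this machine only makes an HTTP call.
    response = requests.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt},
        timeout=120,
    )
    response.raise_for_status()
    image_bytes = base64.b64decode(response.json()["image"])
    return send_file(BytesIO(image_bytes), mimetype="image/png")


if __name__ == "__main__":
    app.run(debug=True)
```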