Image-to-image Stable Diffusion Inference Endpoint?

The Inference API is free and designed for testing and experimenting with models on huggingface.co. It runs on shared infrastructure, which means you are sharing resources with other users; this can lead to high latency and low throughput, and there is no guarantee that the model is running on a GPU. The Inference API does not provide SLAs or other production-required features such as logging and monitoring.

Inference Endpoints, on the other hand, support all of this: dedicated infrastructure, GPU instances, autoscaling, logs, and monitoring.
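As a minimal sketch of what calling an image-to-image endpoint might look like: a common pattern is to send the input image base64-encoded inside a JSON body, alongside generation parameters. The exact request schema depends on the custom handler deployed on the endpoint, so the payload shape and parameter names below (`inputs`, `prompt`, `strength`) are assumptions, not a fixed API contract — check your endpoint's handler for the schema it actually expects.

```python
import base64

def build_payload(image_bytes: bytes, prompt: str, strength: float = 0.8) -> dict:
    """Build a JSON-serializable payload for a hypothetical image-to-image endpoint.

    The image is base64-encoded so it can travel inside a JSON body; the
    parameter names here are illustrative and must match your handler.
    """
    return {
        "inputs": base64.b64encode(image_bytes).decode("utf-8"),
        "parameters": {"prompt": prompt, "strength": strength},
    }

# Sending the request would then look roughly like this (endpoint URL and
# token are placeholders for your own deployment):
#
#   import requests
#   response = requests.post(
#       "https://<your-endpoint>.endpoints.huggingface.cloud",
#       headers={"Authorization": "Bearer <hf_token>",
#                "Content-Type": "application/json"},
#       json=build_payload(open("input.png", "rb").read(), "a watercolor cat"),
#   )
#   # The response format (raw image bytes vs. base64 in JSON) again depends
#   # on the handler you deployed.
```

Because the handler on an Inference Endpoint is user-defined, you control both sides of this contract, which is part of what makes Endpoints suitable for production where the free Inference API is not.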