Too many errors when I prompt

I get too many errors when I prompt on the API inference model page, as if there is lag or the server doesn't respond. I'm forced to refresh and compute again, or wait until the prompt works again. Then it works for maybe 5 or 10 minutes, then errors again for 10 minutes.

I still have the same issues on Spaces, and I have fiber internet.

Hi. That’s a problem with the HF site itself, so you can reply to the post below.

Generally speaking, on HF we can generate images just fine with models the size of SD 1.5 and 2.0, but models the size of SDXL or larger rarely generate reliably.
In any case, the Serverless Inference API isn't working well right now.
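If you want to keep using the Serverless API despite the intermittent failures, one common client-side workaround is to retry with exponential backoff. Here's a minimal sketch using `huggingface_hub`'s `InferenceClient`; the model id and token are placeholders, not a recommendation:

```python
# Minimal retry sketch for the Serverless Inference API.
# The model id and token below are placeholders -- replace with your own.
import time

from huggingface_hub import InferenceClient

client = InferenceClient(
    model="stabilityai/stable-diffusion-2-1",  # placeholder model id
    token="hf_xxx",                            # placeholder token
)

def generate_with_retry(prompt: str, max_attempts: int = 5):
    """Retry text_to_image with exponential backoff on transient errors."""
    for attempt in range(max_attempts):
        try:
            return client.text_to_image(prompt)
        except Exception as err:  # e.g. 503 "model loading", timeouts
            wait = 2 ** attempt   # back off 1, 2, 4, 8, 16 seconds
            print(f"Attempt {attempt + 1} failed ({err}); retrying in {wait}s")
            time.sleep(wait)
    raise RuntimeError("Inference API kept failing after all retries")

image = generate_with_retry("a photo of a cat")
image.save("cat.png")
```

This won't fix the server-side problem, but it saves you from manually refreshing during the 5-10 minute error windows.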

There are a few Zero GPU Spaces that can generate stably; you could use one of those instead.
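You can also call a Space programmatically instead of going through the Serverless API. A sketch with `gradio_client`, assuming a hypothetical Space id; check the Space's "Use via API" panel for the actual endpoint name and arguments:

```python
# Sketch of calling a (hypothetical) image-generation Space via its API.
from gradio_client import Client

client = Client("some-user/sdxl-demo")  # hypothetical Space id
result = client.predict(
    "a photo of a cat",     # prompt argument; varies per Space
    api_name="/predict",    # endpoint name also varies per Space
)
print(result)  # typically a local file path to the generated image
```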

