Inference API turned off? Why?

Updated:

REALISTIC:

MANGA / ANIME / HENTAI:

OTHER:


Remember: if the message “This model does not have enough activity…” appears, wait a few hours and try again.
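If you are hitting this from a script rather than the widget, here is a minimal retry sketch against the Serverless Inference API. It assumes the usual `api-inference.huggingface.co/models/<model_id>` endpoint and that a cold model answers with HTTP 503; the token, model id, and backoff values are placeholders:

```python
import json
import time
import urllib.error
import urllib.request

# Standard serverless endpoint pattern (assumption: unchanged by the recent changes).
API_URL = "https://api-inference.huggingface.co/models/{model_id}"

def backoff_schedule(retries, base=2.0, cap=60.0):
    """Exponential backoff delays in seconds: 1, 2, 4, ... capped at `cap`."""
    return [min(base ** n, cap) for n in range(retries)]

def query_with_retry(model_id, payload, token, retries=5):
    """POST to the serverless endpoint, sleeping while the model is cold (HTTP 503)."""
    req = urllib.request.Request(
        API_URL.format(model_id=model_id),
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    for delay in backoff_schedule(retries):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code != 503:  # 503 = model still loading / not enough activity
                raise
            time.sleep(delay)
    raise RuntimeError(f"{model_id} did not warm up after {retries} attempts")
```

The “wait a few hours” advice above still applies when the model stays cold; the retry loop only helps for models that are actively being loaded.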


I heard that the HF team is collecting feedback on this, so I made a request.
It seems that some people are having trouble with the Serverless API as well.


I just made a very concise request; I hope they take it into account…

Hi folks,

The HF team is making progress on this; check https://huggingface.co/models?inference=warm&sort=trending for which models are currently loaded (“warm”).
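If you want that same list from a script rather than the web page, a minimal sketch follows. Note the assumptions: that the `inference=warm` query parameter from the web URL above also works on the `/api/models` Hub endpoint, and that each returned record carries an `id` field; neither is confirmed in this thread.

```python
import json
import urllib.request

# Assumption: the web filter maps onto the Hub API endpoint unchanged.
HUB_API = "https://huggingface.co/api/models?inference=warm&sort=trending&limit=20"

def model_ids(records):
    """Pure helper: pull the ids out of Hub-API-shaped model records."""
    return [m["id"] for m in records if "id" in m]

def fetch_warm_models(url=HUB_API):
    """Fetch the currently-warm model list and return just the ids."""
    with urllib.request.urlopen(url) as resp:
        return model_ids(json.load(resp))
```

Keeping the id-extraction separate from the network call makes it easy to inspect or filter the raw records yourself if the response shape differs.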

Thanks for the heads-up; in a way, things are improving.

Looks like the folks here are primarily interested in SD models.

But if you’re looking for text-gen models, check out Hugging Face’s missing inference widget and the tech that powers it (featherless.ai).

I’ve looked into the state of HF LLMs before, using your Space as a reference, and there seemed to be a lot of models that were not loaded onto the GPU (or were configured not to be).
I don’t know if this has always been the case or if it changed at the same time as the SD issue…
That said, if a model is ready to load on the GPU, it tends to respond as fast as before.