HF Inference API has been returning the same 404 error for all models for the last few minutes

If you get a 401, it is usually a token-related error or an access problem with a gated model, so please refer to this post. At the moment, though, the 404s are most likely due to a server-side error.:sweat_smile:
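As a rough guide for telling these cases apart, here is a minimal sketch that maps the HTTP status codes you might see from the Inference API to their likely causes. The `describe_status` helper and the interpretations are my own assumptions for illustration, not an official error taxonomy:

```python
# Hypothetical helper: interpret an HTTP status code returned by the
# HF Inference API (https://api-inference.huggingface.co/models/<model_id>).

def describe_status(code: int) -> str:
    """Map an Inference API HTTP status code to a likely cause (assumed, not official)."""
    if code == 200:
        return "ok"
    if code == 401:
        # Invalid/expired token, or a gated model your token cannot access.
        return "auth error: check your token and gated-model access"
    if code == 404:
        # Wrong model id -- or, as in this thread, a transient server-side issue.
        return "not found: verify the model id; may also be a transient server error"
    if code == 503:
        # The model may still be loading; the response body often says so.
        return "model loading or temporarily unavailable: retry later"
    if code >= 500:
        return "server error on the Hugging Face side"
    return f"unexpected status {code}"
```

A call site would then look something like `describe_status(response.status_code)` after a `requests.post(...)` to the model endpoint, so you can log a human-readable hint instead of just the raw code.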