HF Inference API has been returning the same 404 error for all models over the last few minutes

Major models such as FLUX and SD3.5 seem to be working again. LoRA inference is still failing, but overall the service appears to have recovered to roughly the state it was in a few days ago.
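For anyone who wants to check whether a given model is affected, here is a minimal sketch of a probe against the Inference API endpoint (`https://api-inference.huggingface.co/models/{model_id}`). The helper names (`endpoint`, `describe_status`, `check_model`) and the status-code mapping are my own illustration, not an official client; a 404 here was the symptom described above, while a 503 normally just means the model is still loading.

```python
import json
import urllib.error
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/{model_id}"


def endpoint(model_id: str) -> str:
    """Build the Inference API URL for a model."""
    return API_URL.format(model_id=model_id)


def describe_status(code: int) -> str:
    """Map an HTTP status code to a rough health label (illustrative)."""
    if code == 200:
        return "ok"
    if code == 404:
        return "not found (the error seen during the outage)"
    if code == 503:
        return "model loading, retry later"
    return f"error {code}"


def check_model(model_id: str, token: str) -> str:
    """Send one POST probe to the model endpoint and report its status."""
    req = urllib.request.Request(
        endpoint(model_id),
        data=json.dumps({"inputs": "a test prompt"}).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return describe_status(resp.status)
    except urllib.error.HTTPError as e:
        # 404/503/etc. arrive as HTTPError; report them the same way.
        return describe_status(e.code)
```

Calling `check_model("black-forest-labs/FLUX.1-dev", token)` with a valid token should now return `"ok"` for the recovered models, while the affected LoRA repos may still report the 404.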