Model loading always times out?

Hi everyone, I just started using the serverless Inference API for a LoRA I trained.

On the first day after I uploaded the model, I could run inference from the widget on the Hugging Face model page. At some point it stopped working and started returning timeout messages.
For the last 2-3 days I haven't been able to use the model through HF at all. I tried creating a Space and an Inference Endpoint, both of which failed with the same message.
I also created a new model repo, uploaded the safetensors file, and tried to run it from its own model page: same timeout message.
I thought it could be a problem with my account, but I asked friends to try, and they can’t run it either.

I've been getting the same timeout message constantly for the last couple of days.
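
For anyone who wants to try reproducing it outside the widget, a minimal call like this against the serverless API should hit the same timeout (sketch only; the repo ID and token below are placeholders, not my actual ones):

```python
import requests

# Placeholders: substitute the actual repo ID and a valid HF token.
API_URL = "https://api-inference.huggingface.co/models/my-username/my-lora"
HEADERS = {"Authorization": "Bearer hf_xxx"}

# Plain POST with a JSON payload, the same request the model-page
# widget sends under the hood.
response = requests.post(API_URL, headers=HEADERS, json={"inputs": "a test prompt"})

# While a model is still loading, the API normally answers 503 with an
# estimated_time field; here the request just times out instead.
print(response.status_code)
print(response.text)
```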

Any ideas what the problem could be?