Hi,
Just wondering how often we should expect these inference endpoints to be unavailable.
About a week ago the endpoint for BAAI/bge-large-en-v1.5 (https://router.huggingface.co/hf-inference/models/BAAI/bge-large-en-v1.5/pipeline/feature-extraction) was down for almost a day. It came back up no worries, but I’m using it again now and it has started timing out again with “504 Server Error: Gateway Time-out for url….”
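For context, I’ve been working around the 504s with a simple retry-and-backoff wrapper, roughly like the sketch below (the helper name and the injected `do_request` callable are mine, not from any HF library; in practice `do_request` just wraps a `requests.post` to the endpoint above):

```python
import time

def post_with_retry(do_request, max_retries=4, base_delay=1.0):
    """Retry a request on 504 Gateway Time-out with exponential backoff.

    `do_request` is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a lambda wrapping requests.post).
    Hypothetical helper for illustration, not an official HF API.
    """
    resp = None
    for attempt in range(max_retries):
        resp = do_request()
        if resp.status_code != 504:
            return resp
        if attempt < max_retries - 1:
            # back off 1s, 2s, 4s, ... before the next attempt
            time.sleep(base_delay * 2 ** attempt)
    return resp  # still 504 after all retries
```

That papers over the occasional timeout, but obviously doesn’t help when the endpoint is down for hours.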
Is this kind of outage expected often? The Hugging Face status page always shows roughly 100% uptime. Is this an effect of not being on the PRO plan?