500 Internal Error - We're working hard to fix this as soon as possible

Getting the same error - Internal Error - We’re working hard to fix this as soon as possible!

Request ID: Root=1-67fe27c3-5a8df80471acbaca17722c9d

1 Like

Same error 500 on all models :frowning:

1 Like

Yes, I’m hitting the same issue.

1 Like

Same issue. 500 Server Error. Request ID: 6bda04f5-2400-4a97-a597-5303c1df67ec

1 Like

Can’t even access my Space (and code!!!) to try to move to Google Colab.
Not really sure why I started here on Hugging Face anyway. The free tier only
gives $0.10 of HF Inference credit, which is useless.

500

Internal Error - We’re working hard to fix this as soon as possible!

Request ID: Root=1-67fe296b-6d6254084e7046d51a6b3e3f

1 Like

Same. Internal Error - We’re working hard to fix this as soon as possible!

Request ID: Root=1-67fe2a5e-735617e41ce09c21054d473e

1 Like

Hi everyone, thanks for reporting. We’re investigating and we’ll get back to you all real soon. Thanks in advance for bearing with us :hugs:

7 Likes

I’m getting the same error now!

2 Likes

OK, we’ll wait. It happened yesterday as well.

1 Like

Hmm…

From HF Discord:

Tom Aarsen
I don’t have an ETA I’m afraid. The Infra team is investigating and trying to get everything back online.

1 Like

All should be starting to look better now :hugs: if that’s not the case, please let us know. And a big thanks to everyone for reporting and bearing with us, we appreciate it!

2 Likes

I committed a new .py script, but after I restarted the Space it is still running the previous .py file. Is this related to the error?

1 Like

There was a case in the past where a git rewind occurred due to a bug in the Dev Mode feature.

500 Server Error: Internal Server Error for url: https://router.huggingface.co/hf-inference/models/openai/whisper-large-v3-turbo (Request ID: Root=1-67fe5d0b-7f4a576d3904e1033fa45ace;2670767a-aec9-47e0-9b72-d0a5f89691ef)

unknown error

Is this error somehow related to this issue?
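For context, a minimal call that goes through that router URL looks roughly like this (the token and file path below are placeholders, not taken from the error above):

import requests

# Placeholders - substitute your own token and audio file
HF_TOKEN = "hf_XXX"
AUDIO_FILE = "sample.flac"
API_URL = "https://router.huggingface.co/hf-inference/models/openai/whisper-large-v3-turbo"

with open(AUDIO_FILE, "rb") as f:
    res = requests.post(API_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, data=f.read())
res.raise_for_status()  # when the backend is down, this raises the "500 Server Error ... for url" seen above
print(res.json())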

1 Like

I’ve quit trying to be hopeful. I see people say things are being worked on, to just wait, or even that things are fixed now, but I’m seeing the same problems I saw an hour ago, and a day before that, and a day before that, going on toward a month.

I understand these things take time, and they can be stressful for us and just as stressful for the team.
But I mean, if things are actually this bad, maybe we’re ALL in the wrong place.
The only information I myself have gotten (and helpful information in any quantity is nice) is numbers:
500, 503, 404, and (200 + “too many requests”),
and no, nothing has gotten better; it has only gotten worse over a long stretch.
Now it seems as though there was a sudden containment breach, and a massive swath of models has fallen.
GitHub DOES continue to show errors in its status reports, and that I could understand.
But again, it says things got better, until we see they’ve actually continued.

I have NO right not to remain calm, no right not to be understanding.

But I’m really not understanding what’s going on here, unless it’s GitHub, or it was simply obfuscation and a planned dissimulation.
dis·sim·u·la·tion
/dəˌsimyəˈlāSH(ə)n,ˌdiˌsimyəˈlāSH(ə)n/


noun

  1. concealment of one’s thoughts, feelings, or character; pretense.

That would seem easier to accept. It is simple,
because so far, accepting any good news has actually grown in complexity, since it only leads me deeper and deeper into disappointment.

I know it’s not as simple as I would wish, and that’s why I’ll just end this by saying
I mean all of what I’ve said with the deepest sincerity and respect.

2 Likes

The Inference API error feels different from the 500 errors we’ve been getting up until now, but since it’s a 500 error, it might be related.

As for Whisper, even if that’s resolved, there seems to be another kind of error…

1 Like

<RANT_MODE_ON>
There I was - thinking of going PRO to support the platform. Servers cost money, so this felt only fair. But then - again! - they broke the whole effing thing.

Coming back here is always an “I really hope the scripts still work” moment. I’ve had this happen three times now - it feels like a big company with no version management, where some PY gurus play around with the codebase, not caring whether their changes break any code.

This - good people at Hugging Face - is unprofessional.

If you want an example of how this should be done - check out PHP. Fair warning if anything gets deprecated, and stable snapshots that still work after years.

Over here it’s hunting through the forums, Discord, and comments on models/Spaces in the hope of finding a solution to a problem you didn’t cause. And big thanks to John6666 for mostly finding one.

Yeah, rants don’t help anybody - but please get real! Do something like a “stable” version and an “experimental/dev” fork.
</RANT_MODE_ON>

4 Likes

And 5 days later… it is still happening! Seriously?

I’m about to cancel my Pro subscription

3 Likes

A :hugs:-marked 500 error page can appear even for a temporary error. Or perhaps that error screen was originally intended for exactly that purpose. In my experience, it tends to appear when Zero GPU is overloaded, and in such cases it is often fixed within 5 minutes.
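If it really is just a transient overload, simply retrying after a short wait usually gets past it. A minimal sketch (the Space URL and wait times here are illustrative placeholders):

import time
import requests

def get_with_retry(url, retries=5, wait=60):
    """Retry a request a few times, since transient 500s often clear within ~5 minutes."""
    for _ in range(retries):
        res = requests.get(url)
        if res.status_code != 500:
            return res
        time.sleep(wait)  # wait a minute, then try again
    return res

# Example: poll a Space until the temporary 500 page goes away (placeholder URL)
res = get_with_retry("https://huggingface.co/spaces/some-user/some-space")
print(res.status_code)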

Yeah come on guys, this outage is feeling pretty serious now. As a paying subscriber using HF for multiple projects, it’s pretty hard to fathom why this is taking so long to resolve (or at least why there hasn’t been an update).

As an interim alternative while this gets resolved, I’d recommend shifting to Fireworks (I am not on commission!). E.g., to use Whisper turbo, the API is only slightly different from HF’s (and results are seemingly very fast):

import requests

AUDIO_FILE = "XXX"  # path to your audio file
FIREWORKS_API_KEY = "XXX"
API_HEADERS = {"Authorization": f"Bearer {FIREWORKS_API_KEY}"}
API_URL = "https://audio-turbo.us-virginia-1.direct.fireworks.ai/v1/audio/transcriptions"

# Send the audio file as multipart form data
with open(AUDIO_FILE, "rb") as audio:
    res = requests.post(API_URL, headers=API_HEADERS, files={"file": audio}, data={
        "model": "whisper-v3-turbo",
        "temperature": "0",
        "vad_model": "silero"
    })
data = res.json()  # the transcription is typically under data["text"]

https://fireworks.ai/models/fireworks/whisper-v3-turbo

2 Likes