Getting the same error - Internal Error - We're working hard to fix this as soon as possible!
Request ID: Root=1-67fe27c3-5a8df80471acbaca17722c9d
Same 500 error on all models.
Yes, I'm seeing the same issue.
Same issue. 500 Server Error. Request ID: 6bda04f5-2400-4a97-a597-5303c1df67ec
Can't even access my Space (and code!!!) to try to move to Google Colab.
Not really sure why I started here on Hugging Face anyway. The free tier gives
only $0.10 of HF Inference credit, which is useless.
Internal Error - We're working hard to fix this as soon as possible!
Request ID: Root=1-67fe296b-6d6254084e7046d51a6b3e3f
Same. Internal Error - We're working hard to fix this as soon as possible!
Request ID: Root=1-67fe2a5e-735617e41ce09c21054d473e
Hi everyone, thanks for reporting. We're investigating and we'll get back to you all real soon. Thanks in advance for bearing with us.
OK, we will wait. It happened yesterday as well.
Hmm…
From HF Discord:
Tom Aarsen
I don't have an ETA, I'm afraid. The Infra team is investigating and trying to get everything back online.
All should be starting to look better now. If that's not the case, please let us know. And a big thanks to everyone for reporting and bearing with us, we appreciate it!
I committed a new .py script, but after I restarted the Space it is still running the previous .py file. Is this related to the error?
There was a case in the past where a git rewind occurred due to a bug in the Dev Mode feature.
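If a Space keeps serving stale code after a restart, a factory reboot (which rebuilds the image from the latest commit) sometimes helps. A minimal sketch using huggingface_hub, with a placeholder token and Space ID:

from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # a token with write access to the Space
# factory_reboot=True rebuilds the Space image from the latest commit,
# which can help when a plain restart keeps serving an old .py file
api.restart_space(repo_id="your-username/your-space", factory_reboot=True)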
500 Server Error: Internal Server Error for url: https://router.huggingface.co/hf-inference/models/openai/whisper-large-v3-turbo (Request ID: Root=1-67fe5d0b-7f4a576d3904e1033fa45ace;2670767a-aec9-47e0-9b72-d0a5f89691ef)
unknown error
Is this error somehow related to this issue?
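For what it's worth, the same call can be reproduced through huggingface_hub so the failure can be inspected directly; a rough sketch, with a placeholder token and audio file:

from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

client = InferenceClient(token="hf_...")
try:
    out = client.automatic_speech_recognition(
        "sample.wav", model="openai/whisper-large-v3-turbo"
    )
    print(out.text)
except HfHubHTTPError as e:
    # the status code and request ID are the useful bits for a report
    print(e.response.status_code, e.response.headers.get("x-request-id"))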
I quit trying to be hopeful. I see people say things are being worked on, to just wait, or even that things are fixed now, but I'm seeing the same problems I saw an hour ago, and a day before that, and a day before that, going on a month now.
I understand these things take time, and can be as stressful for the team as they are for us,
but I mean, if things are actually this bad, maybe we're ALL in the wrong place.
The only information I myself have gotten (and HELPFUL information in any quantity is nice)
is numbers:
500, 503, 404, and (200 + "too many requests"),
and no, nothing's gotten better; it's only gotten worse over a long stretch.
Now it seems as though there was a sudden containment breach, and a massive swath of models has fallen.
GitHub DOES continue to show errors in status reports, and that I could understand.
But again, it says things got better, until we see they've actually continued on.
I have NO right not to remain calm, no right not to be understanding,
but I'm really not understanding what's going on here, unless it is GitHub, or it was simply obfuscation and a planned dissimulation.
dis·sim·u·la·tion
/dəˌsimyəˈlāSH(ə)n, ˌdēˌsimyəˈlāSH(ə)n/
noun: concealment of one's thoughts, feelings, or character.
That would seem easier to accept; it is simple,
because so far, accepting any good news has actually grown in complexity, since it only leads me deeper and deeper into disappointment.
I know it's not as simple as I would wish, and that's why I'll just end this by saying:
I mean all of what I've said in the deepest sincerity and respect.
The Inference API error feels different from the 500 errors we've been getting up until now, but since it's a 500 error, it might be related.
As for Whisper, even if that's resolved, there seems to be another kind of error…
<RANT_MODE_ON>
There I was - thinking of going PRO to support the platform. Servers cost money, so this felt only fair. But then - again! - they broke the whole effing thing.
Coming back here is always an "I really hope the scripts still work" moment. I've had this three times now - it feels like a big company with no version management, where some PY gurus play around with the codebase, not caring if their changes break any code.
This - good people at Hugging Face - is unprofessional.
If you want an example of how this should be done, check out PHP: fair warnings if anything gets deprecated, and stable snapshots that still work after years.
Over here it is hunting through forums, Discord, and comments on models/Spaces in the hope of finding a solution to a problem you didn't cause. And big thanks to John6666 for mostly finding a solution.
Yeah, rants don't help anybody - but please get real! Do something like a "stable" version and an "experimental/dev" fork.
</RANT_MODE_ON>
And 5 days later… it is still happening! Seriously?
I'm about to cancel my Pro subscription.
A 500 error screen can appear even for a temporary error; perhaps the error screen was originally intended for that purpose. In my experience, it tends to appear when ZeroGPU is overloaded, and in such cases it is often fixed within 5 minutes.
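If it really is a transient overload, a blunt retry loop can usually ride it out. A rough sketch with plain requests, against whatever endpoint is failing (the retry count and wait are arbitrary assumptions):

import time
import requests

def post_with_retry(url, max_tries=5, wait_s=60, **kwargs):
    # retry 5xx responses a few times; transient ZeroGPU overloads
    # reportedly clear within ~5 minutes, so 5 tries x 60 s covers that
    res = None
    for _ in range(max_tries):
        res = requests.post(url, **kwargs)
        if res.status_code < 500:
            break
        time.sleep(wait_s)
    return res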
Yeah come on guys, this outage is feeling pretty serious now. As a paying subscriber using HF for multiple projects, it's pretty hard to fathom why this is taking so long to resolve (or at least why there hasn't been an update).
As an interim alternative while this gets resolved, I'd recommend shifting to Fireworks (I am not on commission!). For example, to use Whisper turbo, the API is only slightly different from HF's (and results are seemingly very fast):
import requests

AUDIO_FILE = "XXX"  # path to your audio file
FIREWORKS_API_KEY = "XXX"
API_HEADERS = {"Authorization": f"Bearer {FIREWORKS_API_KEY}"}
API_URL = "https://audio-turbo.us-virginia-1.direct.fireworks.ai/v1/audio/transcriptions"

# upload the audio as multipart/form-data (the file must be opened in binary mode)
with open(AUDIO_FILE, "rb") as f:
    res = requests.post(
        API_URL,
        headers=API_HEADERS,
        files={"file": f},
        data={
            "model": "whisper-v3-turbo",
            "temperature": "0",
            "vad_model": "silero",
        },
    )
data = res.json()
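One caveat on the snippet above: it may be worth calling res.raise_for_status() before res.json(), so a 4xx/5xx from the Fireworks side fails loudly instead of surfacing as a confusing JSON decode error.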