Problem:
For the past day or so, all my attempts to make POST requests to the public Inference API endpoint have resulted in a 404 Not Found error. This happens regardless of the model I try to query, including standard, known-available models like gpt2. The response body simply contains "Not Found".
My Hugging Face Username: Mehdimemar
Troubleshooting Steps Taken:
Model Validity Confirmed: I've tested numerous valid model IDs (such as gpt2, distilbert-base-uncased-finetuned-sst-2-english, and various segmentation models). The 404 error occurs consistently.
Access Token Verified: I have generated multiple new User Access Tokens from my account settings with the read role. I've carefully copied them and ensured they are correctly formatted in the Authorization: Bearer YOUR_HF_TOKEN header. I tried write tokens as well, with the same result.
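For reference, this is a minimal sketch of how I'm attaching the header (the token value here is a placeholder, not a real token). The request is only built and inspected, not sent, so it just shows the exact header formatting being used:

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/gpt2"
HF_TOKEN = "hf_xxx"  # placeholder token for illustration only

payload = json.dumps({"inputs": "Hello, world"}).encode("utf-8")
req = urllib.request.Request(
    API_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Inspect the prepared header without sending anything over the network:
print(req.get_header("Authorization"))  # Bearer hf_xxx
```

Sending it (e.g. with urllib.request.urlopen(req)) is what produces the 404 in my environment.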
Network Connectivity Verified: nslookup, ping, and tracert to api-inference.huggingface.co are all successful from my testing environment. General internet connectivity is working fine (tested against httpbin.org).
Direct curl Test (Outside other platforms): To isolate the issue, I performed direct tests using curl from my local machine. These tests also result in the same 404 Not Found error. Example command available upon request.
Checked Hugging Face Status Page: The status page indicates services are operational, though HF Inference shows some past instability. A persistent 404 doesn't look like temporary service unavailability, which usually returns a 503.
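To make the distinction explicit, here is a rough sketch of how I'm interpreting the status codes seen in these tests (the mapping is my own illustrative summary, not official documentation):

```python
def interpret_status(code: int) -> str:
    """Map common HTTP status codes from the Inference API to likely causes.

    Illustrative only; based on typical HTTP semantics, not official HF docs.
    """
    if code == 200:
        return "ok"
    if code == 404:
        return "endpoint or model route not found (not a temporary outage)"
    if code == 503:
        return "model loading or temporarily unavailable; retry later"
    if code in (401, 403):
        return "token invalid or lacking permission"
    return f"unexpected status {code}"

print(interpret_status(404))
```

Since I'm consistently getting 404 rather than 503 or 401, a routing or account-level problem seems more likely than a transient outage or a bad token.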
Checked Account Settings: I've reviewed my account settings (tokens, billing [though not required for the public API], etc.) via my settings/tokens page and haven't found any obvious issues, restrictions, or required actions. My email is verified.
Given that network connectivity is fine, valid models are being used, and valid tokens (with the correct permissions) appear to be sent correctly (verified with curl -v), the issue strongly suggests a problem with token validation specific to my account (Mehdimemar), or an unknown restriction or status issue on my account preventing Inference API access.
Has anyone else experienced similar persistent 404 errors recently? Is there anything specific I should double-check, or could this require investigation by the Hugging Face team?
I am also experiencing the same issue, and I've checked with many others too. It has been happening for a few days. I think it's a bug in their system, and partly a side effect of their shift to other inference providers.
Me too. I need to run a huge training task for an application; I paid for the Pro tier, plus a good chunk extra for the first part of the task. It's really frustrating to be stuck, unable to complete it, after forking out that much. Is there no way to get an official reply? I haven't seen any way of contacting support, if it exists. The official status page says all systems go.
Hi all, thanks for reporting! You can check to see if your model is available to use with the HF Inference API (or any Inference Provider) here: Models - Hugging Face. If it’s not deployed by any Inference Provider, you can request provider support on the model page.
Please note Inference Endpoints is available to use - more info here: Inference Endpoints.