Problem:
For the past day or so, all my attempts to make POST requests to the public Inference API endpoint have resulted in a 404 Not Found error. This happens regardless of the model I try to query, including standard, known-available models like gpt2. The response body simply contains “Not Found”.
My Hugging Face Username: Mehdimemar
Troubleshooting Steps Taken:
- Model Validity Confirmed: I’ve tested numerous valid model IDs (like gpt2, distilbert-base-uncased-finetuned-sst-2-english, and various segmentation models). The 404 error occurs consistently for all of them.
- Access Token Verified: I have generated multiple new User Access Tokens from my account settings with the read role. I’ve carefully copied them and ensured they are correctly formatted in the Authorization: Bearer YOUR_HF_TOKEN header. I also tried tokens with the write role, with the same result.
- Network Connectivity Verified: nslookup, ping, and tracert to api-inference.huggingface.co are all successful from my testing environment. General internet connectivity is working fine (tested against httpbin.org).
- Direct curl Test (Outside other platforms): To isolate the issue, I performed direct tests using curl from my local machine. These tests result in the same 404 Not Found error; a representative form of the command is shown after this list.
- Checked Hugging Face Status Page: The status page [1: status] indicates services are operational, though HF Inference shows some past instability. The persistent 404 error doesn’t look like a temporary service outage, which would usually return a 503.
- Checked Account Settings: I’ve reviewed my account settings (Tokens, Billing [though not required for the public API], etc.) via [2: settings/tokens] and haven’t found any obvious issues, restrictions, or required actions. My email is verified.
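
For reference, the requests I have been testing follow the general form below (the token value and input text are placeholders, and gpt2 stands in for any of the model IDs listed above):

```bash
# Representative request form only - token redacted, gpt2 used as an example model.
# The JSON payload uses the standard "inputs" field for text-generation models.
curl -X POST \
  https://api-inference.huggingface.co/models/gpt2 \
  -H "Authorization: Bearer YOUR_HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Hello, world"}'
```

Substituting any of the other model IDs mentioned above produces the same 404 response.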