Persistent '404 Not Found' Error from Inference API on a specific machine

Hello,

I am getting a consistent 404 Not Found error when trying to use the Inference API from my computer. I have done extensive debugging and need help diagnosing a machine-specific issue.

I have confirmed that:

  • The error happens with both public models (gpt2) and gated models.

  • It happens when I bypass my Python code and call the API directly from my terminal with Invoke-WebRequest (PowerShell’s curl alias).

  • It happens with two different, valid Hugging Face accounts.

  • Most importantly, it happens on two different internet connections (my home WiFi and my mobile hotspot).

This points to something in my specific machine’s configuration (firewall, proxy, security software, etc.) rather than my code, my account, or my network, since the same requests fail everywhere I send them from. Can anyone provide guidance on what to check on my computer?
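For reference, this is essentially the request that fails — a minimal Python sketch, assuming the classic serverless Inference API URL scheme; the token below is a placeholder, not a real one:

```python
# Minimal reproduction of the failing request against the serverless
# Inference API. "hf_xxx" is a placeholder -- substitute your own token.
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_xxx"}

response = requests.post(API_URL, headers=headers, json={"inputs": "Hello, world"})
print(response.status_code)  # 404 in my case
print(response.text)         # the error body sometimes names the cause
```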


I think there are several possible causes, but for a 404 error the most likely candidate is something related to the X-Forwarded-Host header.

Hi @murali00150 Thanks for posting! The model openai-community/gpt2 is not available to use with Inference Providers at this time, though on the model page you can request provider support for it!

Note that other models you’re using might not be available with HF Inference but instead with one of the many other Inference Providers. The model page will also show you which providers are available.

If needed, you can also deploy the gpt2 model with Inference Endpoints (dedicated) instead: https://endpoints.huggingface.co.

To see which models are available to use with HF Inference, check out our filtered search here!
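Once you’ve picked a model that is actually served, a quick way to test it is with the huggingface_hub client. Here’s a minimal sketch — the model ID below is illustrative (check the filtered search for one that is currently available), and hf_xxx is a placeholder token:

```python
# Minimal sketch: call a served model through HF Inference using
# huggingface_hub. The model ID is an assumption -- pick one the
# filtered search shows as available.
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_xxx")  # placeholder token
result = client.text_generation(
    "Hello, world",
    model="HuggingFaceH4/zephyr-7b-beta",  # illustrative model ID
    max_new_tokens=20,
)
print(result)
```

A model that isn’t served returns an error no matter which machine or network you call it from, which matches the symptoms above.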


Oh, really… I had assumed the error meant something was broken on just one of my PCs, not that the model was unavailable in general.

Try using a model that is actually deployed.