Issue Summary
Space: chunchu-08/LLM-Comparison-Hub
Hardware: Paid CPU Upgrade (8 vCPU, 32 GB RAM); also tested on free-tier CPU. It worked fine two days ago, but now it fails on both the free-tier and the paid CPU.
Model: gpt-4 via the openai>=1.0.0 SDK
Error: Connection error. (no traceback in the Space logs)
The same GPT-4 call works perfectly on my local machine with the same key:
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Plain chat completion call; this returns normally when run locally
client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello GPT-4"}]
)
Claude and Gemini models work fine in the same Space
.env is loaded correctly using load_dotenv() (a simplified sketch of the loading code follows this list)
No IP restrictions set on OpenAI key
Logs only show "Connection error." with no traceback
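For context, a simplified sketch of how the key is loaded (the print is only for illustration, not part of the real app):

from dotenv import load_dotenv
import os

load_dotenv()  # reads OPENAI_API_KEY from .env when running locally

# Confirm the key is visible to the process without printing its value
print("OPENAI_API_KEY present:", bool(os.getenv("OPENAI_API_KEY")))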
What I’ve Tried
Tested OPENAI_API_KEY locally; it works
Added logging using test_openai_connection.py (a simplified sketch follows this list)
Restarted Space and tested again
Confirmed Claude/Gemini give valid outputs
GPT-4 still fails silently inside the Space
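Simplified sketch of test_openai_connection.py (the real script logs a bit more detail, but this is the gist):

import logging
import os

from openai import OpenAI

logging.basicConfig(level=logging.DEBUG)  # also surfaces httpx request logs in the Space logs
logger = logging.getLogger("openai-test")

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "ping"}],
    )
    logger.info("OpenAI reachable, reply: %s", response.choices[0].message.content)
except Exception:
    # Inside the Space this is the only output I ever get: "Connection error."
    logger.exception("OpenAI call failed")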
My Questions
Has Hugging Face introduced outbound network restrictions to api.openai.com recently?
Do even paid CPU Spaces share outbound IP pools that are rate-limited by OpenAI?
Any recommended workaround?
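One workaround I'm considering (not sure it helps if outbound traffic is actually blocked) is raising the client timeout and retry count, which the openai>=1.0.0 client supports; the values below are just guesses:

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    timeout=60.0,    # explicit per-request timeout in seconds
    max_retries=5,   # SDK default is 2
)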