Hugging Face Forums
Too large to be loaded automatically (16GB > 10GB) issue with Qwen 2.5 VL 7B
Inference Endpoints on the Hub
John6666
April 15, 2025, 2:41am
2
Same here. Maybe related to this incident.