Hugging Face Forums
Push 4-bit converted model to hub
Models
ckandemir
October 27, 2023, 7:28am
The following PRs handle this issue ^
I will just wait until they are merged.
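In the meantime, here is a minimal sketch of how pushing a 4-bit quantized model to the Hub is expected to work once serialization support lands, assuming a recent `transformers` + `bitsandbytes` install; the model name and repo id below are placeholders:

```python
# Sketch: load a model in 4-bit and push it to the Hub.
# Assumes bitsandbytes 4-bit serialization support is available in your
# transformers version; "facebook/opt-350m" and the repo id are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-350m"  # placeholder base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Requires being logged in, e.g. via `huggingface-cli login`
model.push_to_hub("your-username/opt-350m-4bit")      # placeholder repo id
tokenizer.push_to_hub("your-username/opt-350m-4bit")
```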