Today I happened to notice that for bart_large.save_pretrained(x) the model is 1.6GB on disk, but the pretrained checkpoint (both in the local cache and on the model Hub) is 972M. However, for bart-base the trained and pretrained sizes are the same. Looking at the difference between the model sizes, I think 1.6GB is probably the "right" size. Does anyone know how bart-large is compressed down to 972M for storage on the Hub, and is there a way to do this to save space for my bigger trained models?
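For reference, this is roughly what I'm doing to reproduce the comparison (a minimal sketch assuming the standard transformers API; the output directory name is just an example):

```python
import os
from transformers import BartForConditionalGeneration

# Load the pretrained checkpoint from the Hub and re-save it locally.
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
model.save_pretrained("bart-large-resaved")

# Print the on-disk size of the re-saved weights (the filename may be
# pytorch_model.bin or model.safetensors depending on the transformers version).
for fname in os.listdir("bart-large-resaved"):
    path = os.path.join("bart-large-resaved", fname)
    print(f"{fname}: {os.path.getsize(path) / 1e6:.0f} MB")
```

The re-saved weights come out around 1.6GB, versus ~972M for the file downloaded from the Hub.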