[Announcement] Model Versioning: Upcoming changes to the model hub

I want to add some new models, but when I try to push files for the first one I get a Git error: “Your push was rejected because it contains files larger than 10M.” (the PyTorch model .bin file). I’m following these steps (except the TF conversion, which I’ll add later), and the files for this model are: config.json, pytorch_model.bin, sentencepiece.bpe.model, special_tokens_map.json, tokenizer_config.json. It’s a CamemBERT model “converted” to Longformer; the architecture is not pretrained or finetuned for this one.

Yes, you need to track the large files using LFS (this is supposed to be documented in the error message you mentioned).

Did you install git-lfs?

Hi, thanks a lot for the update, very clear and easy to use. Quick question: how do we delete/rename models now?

Yes, I installed it following the error and tried to track the file with it. I safely removed the file and added it back again to track it. However, git lfs migrate import --include="*.bin" would have been enough (no removal needed, in case that’s useful to others). Thank you!
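If it helps anyone decide which files need tracking: a minimal sketch (my own, not an official tool) that lists files in a checkout larger than the push limit, assuming the 10 MB figure from the error message above. Those are the ones to cover with git lfs track patterns before committing.

```python
from pathlib import Path

# Push limit taken from the quoted error message ("larger than 10M");
# adjust the constant if the hub's limit changes.
LIMIT = 10 * 1024 * 1024  # 10 MB

def files_needing_lfs(repo_dir):
    """Return files under repo_dir larger than LIMIT, skipping .git internals."""
    root = Path(repo_dir)
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and ".git" not in p.parts and p.stat().st_size > LIMIT
    )
```

In a typical model repo this flags pytorch_model.bin, which matches running git lfs track "*.bin" before git add.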

Here is the flow of commands I followed to make it work on Google Colab:

sudo apt-get install git-lfs

transformers-cli login
transformers-cli repo create your-model-name
git clone https://huggingface.co/username/your-model-name
cd your-model-name
git lfs install
git config --global user.email "email@example.com"
git add .
git commit -m "Initial commit"
git push https://username:password@huggingface.co/username/your-model-name

@lysandre @sgugger can we add this to the documentation somewhere (Colab example)?

I can try to put some colab together today.

Oh no, I meant adding the few lines to run in a Colab (from @mrm8488) to the model-sharing doc page in transformers!


Done here.


We just added file sizes and download links to the lists of model files; see for instance:

https://huggingface.co/dbmdz/bert-base-turkish-cased/tree/main

Feedback welcome!


Hello, I have two questions.
1. How do I delete models?
2. Can we control the generation length in the Inference API?

To delete a model repository, right now you need to do it in code from the transformers library, with something like:

from transformers.hf_api import HfApi

HfApi().delete_repo(...)

We’ll ship a UI to do it directly from the website in the coming days, though. (cc @pierric @thomwolf)

For your second question (generation length in the Inference API), I’ll let @Narsil and @mfuntowicz reply!

2. Can we control the generation length in the Inference API?

Yes, you can!

curl -X POST -d '{"inputs": "Your input", "parameters": {"max_length": 200}}' https://api-inference.huggingface.co/models/gpt2

Keep in mind this is for text-generation models, and don’t overdo it; we cap the number of tokens anyway.
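The same call can be sketched in Python. This is a rough, unofficial translation of the curl example above: the URL and the payload shape come from that example, while the helper name and any extra parameters are my own assumptions to check against the Inference API docs.

```python
import json

# Endpoint taken from the curl example above (gpt2, a text-generation model).
API_URL = "https://api-inference.huggingface.co/models/gpt2"

def build_generation_payload(inputs, max_length=200):
    """Serialize the JSON body with the `parameters.max_length` field
    shown in the curl example. The API caps token counts server-side,
    so very large values are trimmed anyway."""
    return json.dumps({"inputs": inputs, "parameters": {"max_length": max_length}})

payload = build_generation_payload("Your input")
# POST `payload` with any HTTP client, e.g.:
#   requests.post(API_URL, data=payload)
```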

Does that help?
Cheers.

To delete a model repo, you can now do it directly from the website; see 🔥 [New] You can now delete a model from the website!
