Hello everyone. I'm trying to upload my fine-tuned GPT-2 model to the Model Hub.
When I try to use the upload function push_to_hub, I get the following error:
AttributeError: 'GPT2Model' object has no attribute 'push_to_hub'
In the documentation it says that I can push the model with this function.
Can anybody help please?
You need to update your Transformers library to the latest version.
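For reference, once the library is up to date, push_to_hub is available on the model and tokenizer objects themselves. A minimal sketch, where the local checkpoint path and repo name are placeholders and authentication is assumed to come from huggingface-cli login:

```python
from transformers import GPT2Model, GPT2Tokenizer

# "./my-finetuned-gpt2" and "my-finetuned-gpt2" are placeholder names
model = GPT2Model.from_pretrained("./my-finetuned-gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("./my-finetuned-gpt2")

# Requires being logged in via `huggingface-cli login` (or passing a token)
model.push_to_hub("my-finetuned-gpt2")
tokenizer.push_to_hub("my-finetuned-gpt2")
```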
Thank you so much. I have another question. Is there a way to push a model to the Model Hub after training without saving the new model to disk with trainer.save_model()? Would Trainer.push_to_hub do that, or is it only for pushing the trainer instance? If so, how can I pass my authentication token as a parameter to the trainer.push_to_hub method?
You can set your token in the training arguments with push_to_hub_token. The Trainer will save the model on disk with save_model in any case, since it needs to write it into the local repo of the model before it can git push it to the Model Hub.
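A minimal sketch of what that looks like, assuming the push_to_hub_token argument name used in this thread (newer releases rename it hub_token); the output directory and token are placeholders:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my-finetuned-gpt2",   # placeholder; also used as the default repo name
    push_to_hub=True,                 # let the Trainer push to the Hub
    push_to_hub_token="hf_xxx",       # placeholder token; or log in with `huggingface-cli login`
)
```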
So you mean I should set push_to_hub_token in the training arguments, then train the model and save it with the trainer.save_model() function. Then I should just recreate it from the saved files using
fine_tuned_model = AutoModel.from_pretrained("./file_path")
and then finally push it to the Hub using
fine_tuned_model.push_to_hub("Name of model")
No, you should just do trainer.push_to_hub() once you're done with training. Look at any of the notebook examples for more context, or the push to hub video.
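For context, the end of the training script then looks roughly like this (a sketch only: model and train_dataset are placeholders defined elsewhere, and training_args is the TrainingArguments with push_to_hub=True from above):

```python
from transformers import Trainer

trainer = Trainer(
    model=model,                  # placeholder: your fine-tuned model
    args=training_args,           # TrainingArguments with push_to_hub=True
    train_dataset=train_dataset,  # placeholder: your training dataset
)

trainer.train()
trainer.push_to_hub()  # saves the model and pushes it to the Hub in one call
```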
Thank you very much. I was able to solve my problem after I saw the training argument 'push_to_hub_model_id' and transformers-cli from your example notebook 'Train a language model'.
'Functional' object has no attribute 'push_to_hub'
I am facing this issue with the push_to_hub API, which does not support Functional models from TensorFlow. My model is an NMT model with custom objects.
It is a custom model that I built from scratch and it does not use any methods like from_pretrained.
I want to push it to the Hugging Face Hub as well as create a pipeline for it.
Please help
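In case it helps, here is a minimal sketch of one possible workaround, since push_to_hub only exists on transformers model classes, not on plain Keras models: save the Functional model locally and upload the folder with the huggingface_hub client (assuming a recent huggingface_hub version; the model variable and repo id below are placeholders):

```python
from huggingface_hub import HfApi

# `nmt_model` is your custom Keras Functional model (placeholder name)
nmt_model.save("nmt_model_savedmodel")  # TensorFlow SavedModel directory

api = HfApi()
api.create_repo(repo_id="your-username/nmt-model", exist_ok=True)  # placeholder repo id
api.upload_folder(
    folder_path="nmt_model_savedmodel",
    repo_id="your-username/nmt-model",
)
```

As far as I know, the transformers pipeline() API expects a model with a transformers architecture and config, so a fully custom NMT model would still need its own loading and inference code rather than a standard pipeline.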