"AttributeError: 'Seq2SeqTrainer' object has no attribute 'repo'" after running trainer.push_to_hub()

Just fine-tuned pegasus-large on Google Colab Pro.

I create a Seq2SeqTrainer like so:

trainer = Seq2SeqTrainer(

And after running trainer.train(), I execute:

!huggingface-cli login
!pip install hf-lfs
!git config --global user.email "jakemsc@example.com"
!git config --global user.name "JakeMSc"

Then I call trainer.push_to_hub('test-model'), which gives this output:

Saving model checkpoint to ./results
Configuration saved in ./results/config.json
Model weights saved in ./results/pytorch_model.bin
tokenizer config file saved in ./results/tokenizer_config.json
Special tokens file saved in ./results/special_tokens_map.json
AttributeError                            Traceback (most recent call last)
<ipython-input-21-a676d2b17752> in <module>()
----> 1 trainer.push_to_hub('test-model')

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in push_to_hub(self, commit_message, **kwargs)
   2513             return
-> 2515         return self.repo.push_to_hub(commit_message=commit_message)
   2517     #

AttributeError: 'Seq2SeqTrainer' object has no attribute 'repo'

System information:

  • transformers version: 4.8.2
  • Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.7.10
  • PyTorch version (GPU?): 1.9.0+cu102 (True)
  • Tensorflow version (GPU?): 2.5.0 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Can’t see this error being discussed anywhere. Any advice?

Hey @JakeMSc,

This error is a bit odd, because it suggests the Trainer.init_git_repo function is not being called — and that can only happen if TrainingArguments.push_to_hub is not set to True.
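The control flow can be sketched without transformers at all: in the 4.8.x source, self.repo is only ever assigned inside init_git_repo, which is guarded by args.push_to_hub, so calling push_to_hub() on a trainer whose args left it at the default False hits exactly this AttributeError. A minimal stand-in (the ToyArgs/ToyTrainer names are hypothetical, not real library classes):

```python
# Toy reproduction of the transformers 4.8 control flow
# (hypothetical ToyArgs/ToyTrainer names, not the real classes).

class ToyArgs:
    def __init__(self, push_to_hub=False):
        self.push_to_hub = push_to_hub

class ToyTrainer:
    def __init__(self, args):
        self.args = args
        # Mirrors Trainer.init_git_repo: self.repo is only assigned
        # when args.push_to_hub is True.
        if args.push_to_hub:
            self.repo = "<git repo handle>"

    def push_to_hub(self, commit_message="End of training"):
        # With push_to_hub=False, self.repo was never set, so this
        # attribute lookup raises the AttributeError from the
        # traceback above.
        return self.repo

try:
    ToyTrainer(ToyArgs(push_to_hub=False)).push_to_hub("test-model")
except AttributeError as e:
    print(e)  # 'ToyTrainer' object has no attribute 'repo'

print(ToyTrainer(ToyArgs(push_to_hub=True)).push_to_hub("test-model"))
```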

In particular, I could not reproduce your error when pushing a seq2seq model to the Hub with the official translation tutorial here: Google Colaboratory

Perhaps you can try running the push-to-hub part of that tutorial notebook in your environment, to see whether it's a problem with your configuration?
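Concretely, the usual fix is to set push_to_hub=True when creating the training arguments, before the trainer is constructed, so that init_git_repo runs. A hedged sketch against transformers 4.8 (the output_dir value is a placeholder, not taken from your post):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: output_dir is a placeholder value.
args = Seq2SeqTrainingArguments(
    output_dir="./results",
    push_to_hub=True,  # without this, Trainer never calls init_git_repo
)
```

With these args passed to Seq2SeqTrainer, the trainer clones/creates the Hub repo at construction time and trainer.push_to_hub() then has a self.repo to push from.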

See solution here: