I just fine-tuned pegasus-large on Google Colab Pro.
I created a Seq2SeqTrainer like so:
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
After running trainer.train(), I execute:
!huggingface-cli login
!pip install hf-lfs
!git config --global user.email "jakemsc@example.com"
!git config --global user.name "JakeMSc"
trainer.push_to_hub("test-model")
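For context, `args` is a `Seq2SeqTrainingArguments` object built roughly like this (a minimal sketch from memory; the parameter values are illustrative, not my exact settings, and note that `push_to_hub` is left at its default of `False`):

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the training arguments.
# push_to_hub is not set, so it stays at its default (False).
args = Seq2SeqTrainingArguments(
    output_dir="./results",           # matches the checkpoint path in the output below
    evaluation_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1,
    predict_with_generate=True,
)
```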
Which then gives this output:
Saving model checkpoint to ./results
Configuration saved in ./results/config.json
Model weights saved in ./results/pytorch_model.bin
tokenizer config file saved in ./results/tokenizer_config.json
Special tokens file saved in ./results/special_tokens_map.json
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-a676d2b17752> in <module>()
----> 1 trainer.push_to_hub('test-model')
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in push_to_hub(self, commit_message, **kwargs)
2513 return
2514
-> 2515 return self.repo.push_to_hub(commit_message=commit_message)
2516
2517 #
AttributeError: 'Seq2SeqTrainer' object has no attribute 'repo'
System information:
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I can't see this error discussed anywhere. Any advice?