How to use the Transformers Trainer `report_to` argument with the Accelerate library?

How do I use the Transformers `TrainingArguments` `report_to` option with Accelerate? Do I need to manually compute each metric, like the loss, and send it to TensorBoard or wandb myself?

Hello @Indramal, yes, you are correct. Please refer to the `*_no_trainer.py` examples in the `transformers/examples/pytorch` folder on GitHub.

For example, `run_glue_no_trainer.py` has a `--report_to` argument. Inside the script, you can log metrics to the configured trackers via `accelerator.log`.

Hope this helps. Let us know if this resolves your query.


Thank you very much, I will try it and get back to you.