How can I trace trainer.state.log_history in a multi-GPU environment?

Hello.
I am trying to train a RoBERTa model from scratch.
I can train the model successfully with Trainer.
But when I check trainer.state.log_history, it is empty.
This only happens with multi-GPU training.
When I train on a single GPU, log_history is populated.

How can I get log_history in Multi-GPU training?

I found the answer myself.
In TrainingArguments, just set log_on_each_node=True.
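
For reference, here is a minimal sketch of the relevant setup (output_dir, logging_steps, and the rest of the training configuration are placeholder values; the only part that matters here is log_on_each_node):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./roberta-from-scratch",  # placeholder output directory
    logging_steps=50,                     # placeholder logging cadence
    log_on_each_node=True,                # log on every node, not only the main one
)

# Pass training_args to your Trainer as usual; after trainer.train(),
# trainer.state.log_history should be populated on each node.
```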
