How to run the causal language modeling example on multiple GPUs?

How can I run transformers/examples/pytorch/language-modeling/run_clm_no_trainer.py (from the huggingface/transformers GitHub repository, main branch) on 2 GPUs?
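For reference, here is a minimal sketch of the kind of launch command I have in mind, assuming the `no_trainer` script is driven by 🤗 Accelerate; the model name, dataset, and output directory below are just placeholders:

```bash
# One-time setup: answer the prompts (multi-GPU, 2 processes, etc.)
accelerate config

# Launch the example on 2 GPUs; --num_processes spawns one process per GPU
# (passing --multi_gpu/--num_processes on the command line overrides the saved config).
accelerate launch --multi_gpu --num_processes 2 \
    run_clm_no_trainer.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 4 \
    --output_dir ./clm-output
```

Is this the intended way to do it, or should I be using a different launcher?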