How to check how many TPU cores are being used

I followed the instructions in notebooks/simple_nlp_example.ipynb at master · huggingface/notebooks · GitHub to set up the environment.

Is there a function to check how many TPU cores the model is using, like XLA's xm.xrt_world_size()?

Thanks

If you follow the notebook, you will see the launcher says: "Launching training on 8 TPUs". You can print accelerator.state in your training_function if you want to be sure of that.
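
In case it helps, a minimal sketch of what that could look like (the training_function setup follows the notebook; the exact printout of accelerator.state depends on your Accelerate version):

```python
from accelerate import Accelerator

def training_function():
    accelerator = Accelerator()
    # accelerator.state summarizes the distributed setup: backend,
    # device, and number of processes (8 on the notebook's TPU setup).
    print(accelerator.state)
    # num_processes is the Accelerate counterpart of xm.xrt_world_size().
    print(f"Training on {accelerator.num_processes} processes")
```

When launched with notebook_launcher(training_function) as in the notebook, each TPU process runs this, so you should see the process count reported from all cores.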

Thanks! Yes, I was adapting the code from the tutorial, and mine doesn't show how many cores are used. I will add that and see what it prints.

Interesting: in the tutorial we don't need to specify device = accelerator.device and don't need to push the model to the device with model = model.to(device). Perhaps this is the problem in my script that only uses 1 TPU core?

You should leave the device placement to the Accelerate library, yes.
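
For reference, a hedged sketch of that pattern (the model, optimizer, and train_dataloader here are stand-ins for whatever your script builds, not your exact code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Stand-in model, optimizer, and data; substitute your own objects.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
train_dataloader = DataLoader(TensorDataset(torch.randn(32, 10)), batch_size=8)

# prepare() handles device placement: it moves the model to the right
# device (a TPU core here) and shards the dataloader across processes,
# so no explicit model.to(device) call is needed.
model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)
```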
