Check how many TPU cores are being used

I followed the instructions in notebooks/simple_nlp_example.ipynb at master · huggingface/notebooks · GitHub to set up the environment.

Is there a function to check how many TPU cores the model is using, like XLA’s xm.xrt_world_size()?

Thanks

If you follow the notebook, you will see the launcher says: “Launching training on 8 TPUs”. You can print accelerator.state in your training_function if you want to be sure of that.
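
For reference, here is a minimal sketch of that check inside a training function (accelerator.state and accelerator.num_processes are part of the Accelerate API; the exact printed output can vary by version):

```python
from accelerate import Accelerator

def training_function():
    accelerator = Accelerator()

    # Summarizes the distributed setup: distributed type,
    # number of processes, current device, etc.
    print(accelerator.state)

    # On TPU, the number of processes equals the number of cores in
    # use, analogous to XLA's xm.xrt_world_size().
    print(f"Training on {accelerator.num_processes} process(es)")
```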

Thanks! Yes, I was adapting the code from the tutorial, and mine doesn’t show how many cores are used. I will add that and see what it shows.

Interestingly, in the tutorial we don’t need to specify device = accelerator.device or push the model to the device with model = model.to(device). Perhaps this is the problem with my script, which only uses 1 TPU core?

You should leave the device placement to the Accelerate library, yes.
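
For illustration, a minimal sketch of a loop that leaves device placement to Accelerate (the toy model and data below are placeholders, not code from the notebook): accelerator.prepare moves everything to the right device, so there is no manual model.to(device) call.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Toy model, optimizer, and data purely for illustration.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(
    torch.randn(32, 10), torch.randint(0, 2, (32,))
)
train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

# prepare() wraps everything for the current setup (one TPU core per
# process when launched on TPU) and handles device placement, so no
# explicit model.to(device) is needed.
model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

for inputs, labels in train_dataloader:
    # Batches from the prepared dataloader are already on the right device.
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # instead of loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```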
