Checking model GPU usage

Hello, this might be a stupid question, but is there a way to check which models are using which GPUs?

I’m currently using the DPOTrainer from Hugging Face, which internally loads two models: the reference (ref) model and the policy model. I’m trying to ensure that the models are loaded on the correct GPUs.

Specifically, I want all the ref models to be loaded on GPU 0, and the policy models to be distributed across GPUs 1, 2, and 3.

Is there a way to verify which models are using which GPUs? I’ve tried searching through the documentation and other discussion posts but haven’t found a clear solution. Any advice on how to approach this would be greatly appreciated.
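One quick way to verify placement is to inspect the devices of each model's parameters directly. A minimal sketch (the helper name `report_devices` is my own, not part of DPOTrainer):

```python
import torch
from torch import nn


def report_devices(model: nn.Module, name: str = "model") -> set:
    """Collect the set of devices that a model's parameters live on."""
    devices = {p.device for p in model.parameters()}
    print(f"{name} parameters live on: {devices}")
    return devices


# Tiny stand-in module for demonstration; with DPOTrainer you would call
# this on the trainer's ref model and policy model after they are loaded.
demo = nn.Linear(4, 4)
report_devices(demo, "demo")
```

On a CPU-only machine the set will just contain `cpu`; on your setup you would expect `{cuda:0}` for the ref model and one of `cuda:1`/`cuda:2`/`cuda:3` for each policy replica. If the set contains more than one device, the model has been sharded across GPUs.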


Perhaps this?

It seems that accelerator.prepare_model somewhat overrides the device_map strategy, but I’ll give it another try. Thanks!
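Since `prepare` may move the model regardless of the `device_map` you passed at load time, it can help to snapshot the placement per submodule before and after preparing, and diff the two. A hedged sketch, assuming a plain PyTorch module (the helper `device_map_of` is hypothetical; for transformers models loaded with `device_map=...`, the model also carries an `hf_device_map` dict you can print directly):

```python
from collections import defaultdict

import torch
from torch import nn


def device_map_of(model: nn.Module) -> dict:
    """Group parameter names by the device string they currently sit on."""
    placement = defaultdict(list)
    for param_name, param in model.named_parameters():
        placement[str(param.device)].append(param_name)
    return dict(placement)


model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
before = device_map_of(model)
# ... call accelerator.prepare(model) here, then re-run device_map_of(model)
# and compare against `before` to see whether prepare() relocated anything.
print(before)
```

Comparing the dicts before and after `prepare` makes it obvious whether the strategy you configured survived, without relying on nvidia-smi's coarse per-process view.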
