TrainingArguments changing the GPU by itself

I have 4 GPUs available, and I have selected device 2 using:

import torch

if torch.cuda.is_available():
    torch.cuda.set_device(2)

However, after I instantiate TrainingArguments:
training_args = TrainingArguments('mydirectory')

torch.cuda.current_device() returns 0.

Any idea why this is happening?

Yes, TrainingArguments sets the GPU according to its local_rank value (used for distributed training), so you have to make sure to pass local_rank=2 when you instantiate it.
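
For example, a minimal sketch (note that in a real distributed run, local_rank would be supplied by the launcher rather than hardcoded, and a local_rank other than -1 puts the trainer in distributed mode):

from transformers import TrainingArguments

# local_rank=2 makes the trainer target GPU 2 instead of the default GPU 0.
training_args = TrainingArguments('mydirectory', local_rank=2)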

That said, to run a script on a given GPU, you are better off setting the environment variable CUDA_VISIBLE_DEVICES, as sketched below.
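
For instance, a sketch of setting it from inside the script (equivalent to launching with CUDA_VISIBLE_DEVICES=2 python my_script.py, where my_script.py is a placeholder name; the variable must be set before torch initializes CUDA):

import os

# Hide all GPUs except physical GPU 2; must run before torch touches CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import torch

if torch.cuda.is_available():
    print(torch.cuda.device_count())    # 1: only GPU 2 is visible
    print(torch.cuda.current_device())  # 0: the visible GPU is re-indexed as 0

This way, TrainingArguments can keep its default local_rank=-1 and will still end up on the right physical GPU, since device 0 now maps to physical GPU 2.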
