Training Model on CPU instead of GPU

I am using the Transformers Trainer API to train a BART model on a server. The GPU has enough free memory, but the training process only runs on the CPU instead of the GPU.

I tried adding function decorators with cuda and jit from numba, following this example, but it still doesn’t help.

Why is it using the CPU instead of the GPU? How can I fix this and make training run on the GPU?

Thank you for your help!

The Trainer will use the GPU automatically. If that’s not happening, make sure you have properly installed your NVIDIA drivers and a CUDA-enabled build of PyTorch.
Basically

import torch
print(torch.cuda.is_available())

should print True.
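If it prints False instead, a slightly fuller diagnostic can help narrow down whether the problem is the driver or the PyTorch build. Here is a minimal sketch (the `cuda_diagnostics` helper is just illustrative, not part of any library):

```python
import torch

def cuda_diagnostics():
    # Collect what PyTorch can see about the GPU setup.
    info = {
        "torch_version": torch.__version__,
        # None here means a CPU-only PyTorch build was installed.
        "built_with_cuda": torch.version.cuda,
        "cuda_available": torch.cuda.is_available(),
        "device_count": torch.cuda.device_count(),
    }
    if info["cuda_available"]:
        # Name of the first visible GPU, e.g. "NVIDIA A100".
        info["device_name"] = torch.cuda.get_device_name(0)
    return info

print(cuda_diagnostics())
```

If `built_with_cuda` is None, reinstall PyTorch with CUDA support; if it is set but `cuda_available` is False, the driver or the `CUDA_VISIBLE_DEVICES` environment variable is the likelier culprit.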