PyTorch NLP model doesn’t use the GPU during inference

Hi, how do we determine the GPU device number? I am deploying my model to a SageMaker endpoint, if that matters.

UPDATE:
For anyone else wondering: passing `device=0` selects the first GPU (the default of -1 runs on CPU). See the definition of `device` from the pipeline documentation below:

  • device (int, optional, defaults to -1) – Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, >=0 will run the model on the associated CUDA device id.
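Based on the documentation above, a minimal sketch of picking the device ordinal (the `"sentiment-analysis"` task shown in the comment is just a placeholder; substitute your own task/model):

```python
import torch

# -1 selects the CPU; any value >= 0 selects that CUDA device ordinal,
# so 0 is the first (and on a single-GPU SageMaker instance, the only) GPU.
device = 0 if torch.cuda.is_available() else -1

# Illustrative usage with transformers (task name is a placeholder):
#   from transformers import pipeline
#   nlp = pipeline("sentiment-analysis", device=device)

print(device)
```

On a CPU-only host this prints -1; on a machine with a visible CUDA device it prints 0, which is what makes the pipeline actually run on the GPU.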