How to use a GPU with transformers.AutoModel

import torch
from transformers import AutoModel

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = AutoModel.from_pretrained("<pre train model>")  # weights are loaded on the CPU
outputs = model(<tokenizer inputs>.to(device))  # inputs are moved to the GPU

The above code fails when run on a GPU machine with the following error:
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

It fails because the weights of the pretrained model are on the CPU while the input data is on the GPU.
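Calling .to(device) on the model as well makes the devices match. A minimal sketch of that workaround (the model name "bert-base-uncased" and the example sentence are just placeholders I picked for illustration):

import torch
from transformers import AutoModel, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").to(device)  # move the weights to the GPU

inputs = tokenizer("Hello world", return_tensors="pt").to(device)  # move the input tensors too
outputs = model(**inputs)  # model and inputs now both live on cuda:0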
Moving the model with .to(device) avoids the error, but is there a parameter I can pass to AutoModel.from_pretrained() so the model is placed on the GPU at load time?
https://huggingface.co/transformers/v3.0.2/_modules/transformers/configuration_auto.html#AutoConfig.from_pretrained
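For reference, newer transformers releases (much newer than the v3.0.2 docs linked above) appear to accept a device_map argument in from_pretrained(), which requires the accelerate package. A sketch, assuming such a version is installed ("bert-base-uncased" is again just a placeholder model name):

from transformers import AutoModel

# device_map="auto" asks accelerate to place the weights on the available GPU(s) at load time
model = AutoModel.from_pretrained("bert-base-uncased", device_map="auto")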