PyTorch NLP model doesn't use GPU when running inference

If you are using `pipeline`, you don't need to move the model to the GPU manually; `pipeline` can handle that through the `device` parameter. Just pass the GPU device index and it should work. Also, you can pass BERT_DIR directly to the `model` parameter, and `pipeline` will load the model itself. Try this and let me know.

from transformers import pipeline

nlp = pipeline("question-answering", model=BERT_DIR, device=0)  # device=0 selects the first CUDA GPU
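
As a quick sanity check that the model actually landed on the GPU, something like this should work (the question and context strings are just placeholders for illustration):

result = nlp(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
)
print(result["answer"])   # e.g. "William Shakespeare"
print(nlp.model.device)   # should print something like cuda:0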