I (an NLP newbie) am trying to use the zero-shot models on a system without a GPU. None of the models seem to work. Can this work without a GPU?
example code:
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli", device=-1)
sequence = "За кого вы голосуете в 2020 году?"
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence, candidate_labels)
output:
~/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from Official Drivers | NVIDIA (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
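The CUDA line above is only a warning from torch probing for a GPU driver; with device=-1 the pipeline stays on the CPU, so it should not by itself stop anything from running. As a minimal sketch (assuming torch and transformers are installed), you can check for a GPU explicitly and pick the device index accordingly:

import torch
from transformers import pipeline

# -1 means CPU for transformers pipelines; 0 would be the first GPU
device = 0 if torch.cuda.is_available() else -1
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli",
                      device=device)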
pip install torch==1.7.0+cpu
ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1)
ERROR: No matching distribution found for torch==1.7.0+cpu
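As far as I know, the "+cpu"-tagged wheels are not hosted on PyPI but on PyTorch's own wheel index, which is why a plain pip install torch==1.7.0+cpu cannot find them; the PyTorch Start Locally page mentioned later in this thread generates the full command including the extra index. A quick way to see which build actually got installed is to inspect torch itself:

import torch

print(torch.__version__)          # e.g. "1.7.1+cpu" for a CPU-only wheel
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # False is fine; pipelines still run on CPU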
The basic transformers pipeline command also showed the CUDA warning, but gave correct output:
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
If someone knows how to fix this, please let me know. Thanks!
pip install torch==1.7.0+cpu
ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1)
ERROR: No matching distribution found for torch==1.7.0+cpu
I would advise you to just set up a fresh Python environment using the latest conda distribution and then do pip install transformers, and it should work. I have set up multiple non-GPU VMs (both Amazon Linux and Ubuntu) in the last week that way and it has worked without any mucking around with torch or CUDA versions.
For those interested: the pip command depends on your environment (Python version and operating system) and is not the same for Linux, Mac, or Windows. Check the right boxes for your OS and choose "None" in the CUDA section on the PyTorch "Start Locally" page. It will give you the right installation command (you do not need torchvision or torchaudio, so you can remove those from the command).
$ conda install pytorch-cpu -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
If python is on the left-most side of the chain, that’s the version you’ve asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
So now I need to figure out how to downgrade my Python install, with a virtual env or whatever.
export PYTHONPATH=/usr/bin/python3.6
alias python=python3.6
$ python --version
Python 3.6.12
However, conda still uses the latest Python version on my system; I don't yet know where that is configured. The conda install still fails and prints: "Your python: python=3.8".
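Note that PYTHONPATH only controls where Python looks for modules, and a shell alias does not change which interpreter conda or pip operate on, so neither affects the environment conda is solving for. A minimal standard-library check of which interpreter is actually in use:

import sys

print(sys.executable)  # path of the interpreter actually running this code
print(sys.version)     # its version, e.g. 3.8.x vs 3.6.x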
I think you're leaving off the last part: to install torch you need to select the correct config from Start Locally | PyTorch and then run the command it gives you.
Thanks, I got it running now without GPU complaints!
Although the code is not producing output yet:
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli")
sequence = "За кого вы голосуете в 2020 году?"
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence, candidate_labels)
output:
Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Perhaps I am doing something wrong here; I will dig some more later.
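One possible explanation for the missing output, assuming the snippet is run as a script rather than in an interactive session: the classifier's return value is never printed. A minimal sketch, reusing the classifier, sequence, and candidate_labels defined above:

result = classifier(sequence, candidate_labels)
# The zero-shot pipeline returns a dict with "sequence", "labels", and "scores",
# with the labels sorted by descending score.
print(result)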
I've not quite got to that stage yet, but I think it's normal? My understanding is that you pre-train, say, XLMRobertaForMaskedLM, which creates a model consisting of an encoder with an MLM head and no pooling layer. Once that's trained, you load it as an XLMRobertaForSequenceClassification, which copies the trained encoder and adds pooling and a sequence classifier; you then need to fine-tune this?
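For what it's worth, the warning above only lists the pooler weights, which (as far as I can tell) XLMRobertaForSequenceClassification does not use for its classification head, and joeddav/xlm-roberta-large-xnli is already fine-tuned on XNLI, so no further fine-tuning should be needed for zero-shot use. A small sketch to inspect the classification head the checkpoint ships with (the label names are simply read from the model config):

from transformers import AutoModelForSequenceClassification

# Assumes the model files are already cached locally or can be downloaded.
model = AutoModelForSequenceClassification.from_pretrained("joeddav/xlm-roberta-large-xnli")

# The head configuration comes from the checkpoint; for an NLI model the
# labels are expected to be contradiction / neutral / entailment.
print(model.config.num_labels)
print(model.config.id2label)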