ERROR: Could not find a version that satisfies the requirement torch==1.7.1+cpu

I'm an NLP newbie trying to use the zero-shot models on a system without a GPU. None of the models seem to work. Can this work without a GPU?

example code:
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli", device=-1)
sequence = "За кого вы голосуете в 2020 году?"  # "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence, candidate_labels)

output:

~/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']

  • This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Hi @waldenn

All the models and pipelines can work on CPU.
What you posted are warnings which you can safely ignore.
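If the CUDA warning is noisy on a CPU-only box, it can be silenced with the standard library. A minimal sketch, assuming the message prefix from the traceback above:

```python
import warnings

# The CUDA message is an ordinary UserWarning; on a machine with no GPU
# it is harmless, so it can be filtered by its message prefix.
warnings.filterwarnings("ignore", message="CUDA initialization.*")

# Anything matching the pattern is now suppressed:
with warnings.catch_warnings(record=True) as caught:
    warnings.warn("CUDA initialization: Found no NVIDIA driver", UserWarning)
print(len(caught))  # 0 -- the warning was filtered out
```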

I think you have installed the CUDA version of torch on a CPU-only machine, which is what's causing the first warning.


Thanks, but I followed the docs saying:

Alternatively, for CPU-support only, you can install 🤗 Transformers and PyTorch in one line with:
pip install transformers[torch]

I previously also did:
pip install transformers[tf-cpu]

What do I need to install / configure to make CPU-only inference work for zero-shot learning?

FYI: The "pip list" output:
Package Version


absl-py 0.11.0
apsw 3.28.0.post1
apturl 0.5.2
astunparse 1.6.3
beautifulsoup4 4.8.2
blinker 1.4
Brlapi 0.7.0
cachetools 4.1.1
certifi 2019.11.28
chardet 3.0.4
chrome-gnome-shell 0.0.0
cliapp 1.20180812.1
Click 7.0
cmdtest 0.32+git
colorama 0.4.3
command-not-found 0.3
cryptography 2.8
css-parser 1.0.4
cssselect 1.1.0
cssutils 1.0.2
cupshelpers 1.0
dataclasses 0.6
dbus-python 1.2.16
defer 1.0.6
distro 1.4.0
distro-info 0.23ubuntu1
dnspython 1.16.0
entrypoints 0.3
feedparser 5.2.1
filelock 3.0.12
fire 0.3.1
future 0.18.2
gast 0.3.3
google-auth 1.23.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
grpcio 1.34.0
h5py 2.10.0
html2text 2020.1.16
html5-parser 0.4.9
html5lib 1.0.1
httplib2 0.14.0
httptools 0.1.1
idna 2.8
ifaddr 0.1.6
joblib 0.17.0
Keras-Preprocessing 1.1.2
keras2onnx 1.7.0
keyring 18.0.1
language-selector 0.1
launchpadlib 1.10.13
lazr.restfulclient 0.14.2
lazr.uri 1.0.3
louis 3.12.0
lxml 4.5.0
macaroonbakery 1.3.1
Markdown 3.1.1
mechanize 0.4.5
msgpack 0.6.2
netifaces 0.10.4
nose 1.3.7
numpy 1.19.4
oauthlib 3.1.0
olefile 0.46
onnx 1.8.0
onnxconverter-common 1.7.0
opt-einsum 3.3.0
packaging 20.3
pexpect 4.6.0
Pillow 7.0.0
pip 20.0.2
protobuf 3.6.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycairo 1.16.2
pychm 0.8.6
pycups 1.9.73
Pygments 2.3.1
PyGObject 3.36.0
PyJWT 1.7.1
pymacaroons 0.13.0
PyNaCl 1.3.0
pyparsing 2.4.6
PyQt5 5.14.1
PyQtWebEngine 5.14.0
pyRFC3339 1.1
python-apt 2.0.0+ubuntu0.20.4.2
python-dateutil 2.7.3
python-debian 0.1.36ubuntu1
pytz 2019.3
pyxdg 0.26
PyYAML 5.3.1
regex 2019.8.19
reportlab 3.5.34
repoze.lru 0.7
requests 2.22.0
requests-oauthlib 1.3.0
requests-unixsocket 0.2.0
Routes 2.4.1
rsa 4.6
sacremoses 0.0.43
SecretStorage 2.3.1
sentencepiece 0.1.94
setuptools 45.2.0
simplejson 3.16.0
sip 4.19.21
six 1.14.0
soupsieve 1.9.5
systemd-python 234
tensorboard 2.4.0
tensorboard-plugin-wit 1.7.0
tensorflow-cpu 2.3.1
tensorflow-estimator 2.3.0
termcolor 1.1.0
tokenizers 0.9.4
torch 1.7.0
tqdm 4.54.1
transformers 4.0.1
ttystatus 0.38
typing-extensions 3.7.4.3
ubuntu-advantage-tools 20.3
ubuntu-drivers-common 0.0.0
ufw 0.36
unattended-upgrades 0.1
urllib3 1.25.8
wadllib 1.3.3
webencodings 0.5.1
WebOb 1.8.5
Werkzeug 1.0.1
wheel 0.34.2
wrapt 1.12.1
xkit 0.0.0
zeroconf 0.24.4

You could just uninstall the torch GPU version and then install the CPU one:

pip uninstall torch to uninstall
and
pip install torch==1.7.1+cpu to install torch cpu

Ok, for the second command I get this error:

pip install torch==1.7.0+cpu
ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1)
ERROR: No matching distribution found for torch==1.7.0+cpu

$ python --version
Python 3.8.5

I get the same error for the 1.7.1 version.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal

These are warnings; are you able to see the output of classifier(sequence, candidate_labels)?

No, not for the zero-shot learning code.

The basic transformer command also showed the CUDA warning, but gave correct output.
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"

If someone knows how to fix this, please let me know. Thanks!

pip install torch==1.7.0+cpu
ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1)
ERROR: No matching distribution found for torch==1.7.0+cpu

I would advise you to just set up a fresh Python environment using the latest conda distribution and then do pip install transformers, and it should work. I have set up multiple non-GPU VMs (both Amazon Linux and Ubuntu) in the last week that way and it has worked without any mucking around with either torch or CUDA versions.

For those interested: the pip command depends on your environment (Python version and operating system), and it is not the same for Linux, Mac, or Windows. Check the right boxes for your OS and choose "None" in the CUDA part on this website. It will give you the right installation command (you do not need vision and audio, so you can remove those from the command).


Thanks. I installed conda and entered:

conda install pytorch-cpu -c pytorch

$ conda install pytorch-cpu -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:

Specifications:

  • pytorch-cpu -> python[version='>=2.7,<2.8.0a0|>=3.7,<3.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0']

This command fails with:

Your python: python=3.8

If python is on the left-most side of the chain, that’s the version you’ve asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.

So now I need to figure out how to downgrade my Python install with an env or whatever.

Installed Python 3.6 also (using an external apt-repository) and configured it to use those paths:

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.6

export PYTHONPATH=/usr/bin/python3.6
alias python=python3.6

$ python --version
Python 3.6.12

However conda still uses the latest Python version on my system. Don’t yet know where that is configured. The conda install still fails and prints: “Your python: python=3.8”

Anyone has good ideas how to match the required Python version better on Ubuntu 20.04 LTS?

I think you're leaving off the last part: to install torch you need to select the correct config from https://pytorch.org/get-started/locally/ and then run the command it tells you, i.e.,

pip install torch==1.7.1+cpu -f https://download.pytorch.org/whl/torch_stable.html

The last argument is important as it tells pip where the wheels are
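For background: the "+cpu" suffix is a PEP 440 local version identifier, and PyPI does not host wheels with local version labels, which is why a plain pip install torch==1.7.1+cpu finds nothing and -f must point at PyTorch's own wheel index. A toy sketch of how such version strings split (pure standard library; split_local is just an illustrative helper, not a pip API):

```python
# PEP 440: everything after "+" in a version string is the "local" segment.
# PyPI only serves the public part, so "1.7.1+cpu" has to come from an
# extra index such as https://download.pytorch.org/whl/torch_stable.html
def split_local(version):
    public, _, local = version.partition("+")
    return public, local or None

print(split_local("1.7.1+cpu"))  # ('1.7.1', 'cpu')
print(split_local("1.7.1"))      # ('1.7.1', None)
```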

I would also recommend using pyenv and pyenv-virtualenv; it'll allow you to install whatever version of Python you want and create isolated environments - see https://github.com/pyenv/pyenv-installer


Thanks, I got it running now without GPU complaints!

Although the code is not producing output yet:

from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli")
sequence = "За кого вы голосуете в 2020 году?"  # "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence, candidate_labels)

output:

Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']

  • This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Perhaps I am doing something wrong here; I will dig some more later.

I've not quite got to that stage yet, but I think it's normal? My understanding is that you pre-train, say, XLMRobertaForMaskedLM, which creates a model consisting of an encoder with an MLM head and no pooling layer. Once that's trained, you load it as an XLMRobertaForSequenceClassification, which copies the trained encoder and adds pooling and a sequence-classification head; you then need to fine-tune this.
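As a side note on how the zero-shot pipeline turns that classification head into label scores: it runs one NLI entailment pass per candidate label and then softmaxes the entailment logits across labels. A toy sketch with made-up logits (not real model outputs):

```python
import math

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical entailment logits, one per candidate label (invented values)
entailment_logits = {"Europe": 0.4, "public health": -1.1, "politics": 2.3}
labels = list(entailment_logits)
scores = softmax([entailment_logits[l] for l in labels])

# Rank labels by score, highest first
ranked = sorted(zip(labels, scores), key=lambda p: -p[1])
print(ranked[0][0])  # "politics" wins with these made-up logits
```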