Hi.
I am using:

- `transformers` version: 4
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
I am running the question-answering pipeline example from the docs, and it throws the following error:
```
Traceback (most recent call last):
  File "c:/Workspace/py-conda-workspaces/py36-conda-speechDemo/text-question-answering.py", line 9, in <module>
    result = nlp(question="What is extractive question answering?", context=context)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\pipelines.py", line 1874, in __call__
    start, end = self.model(**fw_args)[:2]
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 706, in forward
    return_dict=return_dict,
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 480, in forward
    inputs_embeds = self.embeddings(input_ids)  # (bs, seq_length, dim)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 107, in forward
    word_embeddings = self.word_embeddings(input_ids)  # (bs, max_seq_length, dim)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\functional.py", line 1852, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)
```
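If I read the error correctly, the embedding layer received `int32` token ids, while on PyTorch 1.7 `torch.nn.Embedding` only accepts `int64` (Long) indices. A minimal sketch of the same failure outside of transformers (the variable names are mine):

```python
import torch

emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=4)

idx_int32 = torch.tensor([1, 2, 3], dtype=torch.int32)
# emb(idx_int32)  # raises the same RuntimeError: indices must be Long, not Int

idx_int64 = idx_int32.long()  # casting to int64 makes it work
print(emb(idx_int64).shape)   # torch.Size([3, 4])
```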
Here is the code; it is copy-pasted from the "Summary of the tasks" page of the transformers 4.0.0 documentation on huggingface.co:
```python
from transformers import pipeline

nlp = pipeline("question-answering")

context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/question-answering/run_squad.py script.
"""

result = nlp(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")

result = nlp(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
```
If it helps: yesterday I upgraded from v3.5.1 to v4 using `pip install --upgrade transformers`. Since the upgrade, the environment also seems broken in other places; for example, `transformers-cli --help` fails (see a related issue, "The question-answering example in the doc throws an AttributeError exception. Please help", in the Beginners category of the Hugging Face Forums).
Any help would be appreciated. Thanks!