Error running GPT-NEO on local machine

Hi, I’m trying to run GPT-Neo through the Hugging Face transformers pipeline:

```
from transformers import pipeline
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
```

I get the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dpacman/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 371, in pipeline
    framework, model = infer_framework_from_model(model, targeted_task, revision=revision, task=task)
  File "/home/dpacman/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/transformers/pipelines/base.py", line 90, in infer_framework_from_model
    model = model_class.from_pretrained(model, **model_kwargs)
  File "/home/dpacman/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 382, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'> for this kind of AutoModel: TFAutoModelForCausalLM.
Model type should be one of BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig.
```

GPT-Neo is only available as a PyTorch model, not a TensorFlow one. The traceback shows the pipeline falling back to `TFAutoModelForCausalLM`, which suggests PyTorch isn’t installed in your `tf-gpu` environment, and the TensorFlow auto classes don’t support `GPTNeoConfig`.
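After installing PyTorch in that environment (`pip install torch`), the pipeline should pick it up automatically. You can also force the backend explicitly; here’s a minimal sketch, assuming both `torch` and `transformers` are installed:

```
from transformers import pipeline

# Force the PyTorch backend; GPT-Neo has no TensorFlow implementation.
generator = pipeline(
    'text-generation',
    model='EleutherAI/gpt-neo-1.3B',
    framework='pt',
)

# Sample usage: generate a short continuation.
print(generator("Hello, I'm a language model,", max_length=30, num_return_sequences=1))
```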
