Hi there, I’m trying to build a QA system with the GPT-Neo model.
I’m using the transformers library:
from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering

# this call raises the ValueError shown below
model = AutoModelForQuestionAnswering.from_pretrained("EleutherAI/gpt-neo-2.7B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
nlp = pipeline(task="question-answering", model=model, tokenizer=tokenizer)
I get the following error:
ValueError: Unrecognized configuration class <class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of RoFormerConfig, BigBirdPegasusConfig, BigBirdConfig, ConvBertConfig, LEDConfig, DistilBertConfig, AlbertConfig, CamembertConfig, BartConfig, MBartConfig, LongformerConfig, XLMRobertaConfig, RobertaConfig, SqueezeBertConfig, BertConfig, XLNetConfig, FlaubertConfig, MegatronBertConfig, MobileBertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, MPNetConfig, DebertaConfig, DebertaV2Config, IBertConfig.
So it seems this model type isn’t supported by the question-answering pipeline. Is there a way to add support for it? What can I do to work around this?
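
In case it helps, here is the workaround I’m currently sketching: since GPT-Neo is a decoder-only generative model without an extractive QA head, I’d frame the task as text generation and embed the context and question in the prompt. This is just my own sketch (the prompt template and generation parameters are assumptions on my part, not an official recipe):

from transformers import pipeline

# GPT-Neo is supported by the text-generation pipeline, so frame QA
# as plain generation with the context and question in the prompt.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
question = "When was the Eiffel Tower completed?"

# hypothetical prompt template; a few-shot or instruction-style
# format could be substituted here
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"

result = generator(
    prompt,
    max_new_tokens=20,      # only generate a short answer
    do_sample=False,        # greedy decoding for reproducibility
    return_full_text=False, # return only the completion, not the prompt
)
print(result[0]["generated_text"].strip())

Alternatively, I suppose I could switch to one of the architectures in the list above that actually ships a QA head (e.g., a RoBERTa model fine-tuned on SQuAD, such as deepset/roberta-base-squad2), but I’d prefer to keep GPT-Neo if possible.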