Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
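# The next line is what raises the error above: T5Config is an
# encoder-decoder config and is not in AutoModelForCausalLM's mapping.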
model = AutoModelForCausalLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa").to(device)
def get_response(question, context, max_length=32):
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'].to(device),
                            attention_mask=features['attention_mask'].to(device),
                            max_length=max_length)
    return tokenizer.decode(output[0])
# Some examples in different languages

# English
context = 'HuggingFace won the best Demo paper at EMNLP2020.'
question = 'What did HuggingFace win?'
get_response(question, context)

# Spanish: "What did HuggingFace win?"
context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.'
question = '¿Qué ganó HuggingFace?'
get_response(question, context)

# Russian: "What did HuggingFace win?"
context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.'
question = 'Что победило в HuggingFace?'
get_response(question, context)
The issue is that mT5 is a seq2seq (encoder-decoder) model, and seq2seq models must be loaded with AutoModelForSeq2SeqLM or, in this case, directly with the MT5ForConditionalGeneration class. AutoModelForCausalLM only covers decoder-only architectures, which is why T5Config is rejected with the error above.
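A minimal corrected sketch, assuming the same checkpoint and helper as above (skip_special_tokens is an added nicety to strip <pad>/</s> from the output; it is not in the original snippet):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
# AutoModelForSeq2SeqLM dispatches T5/mT5 configs to the matching
# *ForConditionalGeneration class, so the "Unrecognized configuration class"
# error no longer occurs.
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa").to(device)

def get_response(question, context, max_length=32):
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], return_tensors='pt').to(device)
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'],
                            max_length=max_length)
    # skip_special_tokens strips the <pad> and </s> tokens T5-family models emit.
    return tokenizer.decode(output[0], skip_special_tokens=True)

Equivalently, MT5ForConditionalGeneration.from_pretrained(...) loads the same weights without the auto-class dispatch.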