Load the model first:
from transformers import TFAutoModelForQuestionAnswering, AutoTokenizer, pipeline

model = TFAutoModelForQuestionAnswering.from_pretrained("pathtomymodel")
Create the tokenizer, tokenize the inputs, and set up a pipeline:
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer(question, context, return_tensors="tf")
question_answerer = pipeline("question-answering", model=model, tokenizer=tokenizer)
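For reference, question and context in the snippet above are plain Python strings; the exact values come from my data, but illustrative placeholders would look like this:

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France."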
Run the model on the tokenized inputs:
outputs = model(**inputs)
This raises the following error:
raise ValueError(
ValueError: The following keyword arguments are not supported by this model: ['token_type_ids'].
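For completeness, the question_answerer pipeline above was set up to be called roughly like this (execution never reaches it because of the error):

result = question_answerer(question=question, context=context)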