When using the transformers question-answering pipeline, there is a way to allow the model to indicate that the answer is not found in the context: pass handle_impossible_answer=True when calling the pipeline.
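For reference, here is a minimal sketch of how that looks locally (the model name is just an example of a SQuAD 2.0-style model trained for unanswerable questions):

```python
from transformers import pipeline

# Example model; any QA model trained on SQuAD 2.0-style data should work
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="What is the capital of Australia?",
    context="Paris is the capital of France.",
    handle_impossible_answer=True,  # allow an empty answer when none is found
)
print(result)
# When no answer is found, the pipeline returns an empty string,
# e.g. {'score': ..., 'start': 0, 'end': 0, 'answer': ''}
```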
The issue is that the hosted Inference API defaults handle_impossible_answer to False, and there is no way that I'm aware of to change it to True. This makes it impossible to deploy Q&A models trained to handle impossible answers.