How to handle impossible answers in Q&A models when using the Inference API?

When using the transformers question-answering pipeline, there is a way to allow the model to indicate that the answer is not found in the context: pass the parameter handle_impossible_answer=True to the pipeline.
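For context, this is how the parameter is used locally; a minimal sketch, assuming a model fine-tuned on SQuAD 2.0 (the model name here is illustrative):

```python
from transformers import pipeline

# Any extractive QA model fine-tuned on SQuAD 2.0 (which includes
# unanswerable questions) should support this; the name is an example.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="What is the capital of Australia?",
    context="Paris is the capital of France.",
    handle_impossible_answer=True,  # allow an empty answer when none is found
)

# When the model judges the question unanswerable from the given context,
# result["answer"] comes back as an empty string.
print(result)
```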

The issue is that the hosted Inference API defaults handle_impossible_answer to False, and as far as I can tell there is no way to change it to True. This effectively prevents deploying Q&A models trained to handle impossible answers.

Found the answer: you can pass these settings in your model card metadata. What I had missed is that you also need to add text below the metadata block for the settings to actually take effect.
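For anyone else hitting this, a sketch of what the model card's README.md might look like (the metadata fields besides `inference.parameters` are illustrative):

```yaml
---
language: en
inference:
  parameters:
    handle_impossible_answer: true
---

Model description goes here. Note that some text must appear below the
front matter for the settings above to activate.
```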