Missing Endpoint for NLI

I think we are missing an API endpoint for NLI models such as facebook/bart-large-mnli, one that would return the entailment score for an arbitrary premise/hypothesis pair. Is that correct?
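
To make that concrete, here is a rough local sketch of what I would like such an endpoint to return (just an illustration with transformers; the example sentences are placeholders):

```python
# Local sketch of what I mean: the per-label NLI scores for an arbitrary
# premise/hypothesis pair. Label order is read from the checkpoint's config
# rather than hard-coded.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer encodes the two sentences as a pair; the model returns one
# logit per NLI label (contradiction / neutral / entailment for this checkpoint).
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for idx, label in model.config.id2label.items():
    print(label, round(probs[idx].item(), 4))
```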

Thank you for any insights!

When deploying the model, you can change its task from zero-shot-classification to text-classification.
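
Locally, the text-classification pipeline can take the two sentences as a pair; a rough sketch (how this maps onto the hosted API payload is a separate question):

```python
# Sketch: the transformers text-classification pipeline accepts a sentence
# pair as {"text": ..., "text_pair": ...}. This only shows the local pipeline,
# not the exact payload the hosted endpoint expects.
from transformers import pipeline

classifier = pipeline("text-classification", model="facebook/bart-large-mnli")

result = classifier(
    {"text": "A soccer game with multiple males playing.",
     "text_pair": "Some men are playing a sport."},
    top_k=None,  # return scores for all labels, not just the top one
)
print(result)  # e.g. a list of {'label': ..., 'score': ...} entries
```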

Thank you, but how does the model know which part of the input is the premise and which is the hypothesis? I thought this was an implementation detail of the model, so an API that abstracts it away seems to be missing.

I found the following thread: How to separate premise and hypothesis in Inference API. It discusses this exact problem. From that thread, it looks to me like the expected input format differs from model to model.
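
To illustrate why it seems model-specific, here is a small sketch: both the separator tokens used to join premise and hypothesis and the label order come from the checkpoint itself.

```python
# Sketch: inspect how a given NLI checkpoint joins premise and hypothesis,
# and what its label mapping is. Both appear to vary between checkpoints,
# which is why a single raw-text format for the API is not obvious.
from transformers import AutoConfig, AutoTokenizer

model_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

encoded = tokenizer(premise, hypothesis)
print(tokenizer.decode(encoded["input_ids"]))  # shows the model-specific separators
print(config.id2label)                          # shows the model-specific label order
```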

Thank you again for any insights!

If you need more customization, you can create a custom handler and define the premise/hypothesis handling yourself: Create custom Inference Handler
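
A minimal sketch of such a handler, assuming the documented EndpointHandler interface and a JSON payload with explicit premise and hypothesis fields (the field names are just an example, not a standard):

```python
# handler.py - minimal sketch of a custom Inference Endpoints handler that
# accepts {"inputs": {"premise": ..., "hypothesis": ...}} and returns the
# per-label NLI scores. The payload shape is an assumption for illustration.
from typing import Any, Dict, List

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the repository contents on the endpoint.
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForSequenceClassification.from_pretrained(path)
        self.model.eval()

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        inputs = data["inputs"]
        encoded = self.tokenizer(
            inputs["premise"],
            inputs["hypothesis"],
            return_tensors="pt",
            truncation=True,
        )
        with torch.no_grad():
            probs = self.model(**encoded).logits.softmax(dim=-1)[0]
        return [
            {"label": label, "score": probs[idx].item()}
            for idx, label in self.model.config.id2label.items()
        ]
```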

Okay, I realize I probably have used the wrong subforum/category. What category should I use for asking about the free Inference API?

As I understand it, for the free Inference API there is a fixed set of tasks:

And which task a particular repository can be used for depends on the pipeline_tag set in its README.md.

But looking at the above link, there is no such tag one could use for NLI. There is only one for text-classification, and in that case the input format required for NLI is unknown. So I feel like an endpoint is missing there?
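
For reference, the closest thing in the current task set seems to be the zero-shot-classification payload, which takes candidate labels rather than a raw premise/hypothesis pair. A sketch (endpoint URL and payload as documented for the free Inference API; the token is a placeholder):

```python
# Sketch: querying the free Inference API with the existing
# zero-shot-classification task. It accepts candidate labels (optionally a
# hypothesis_template), but not a raw premise/hypothesis pair with direct
# entailment scores.
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-mnli"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

payload = {
    "inputs": "A soccer game with multiple males playing.",
    "parameters": {"candidate_labels": ["sports", "politics", "cooking"]},
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # labels with scores, not entailment/neutral/contradiction
```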

Thank you