How can I use the models provided in huggingface.co/models?

Hi,

How can I use the models provided at huggingface.co/models? For example, if I want to reproduce the output shown in the hosted Inference API example for roberta-large-mnli, how would I get the same result for my own input text?

I see that this is written in “use in transformers”:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

But I don't know how to apply the downloaded model to my own text.

I’ve tried passing it to pipeline with classifier = pipeline('roberta-large-mnli'), but it isn’t recognized.

Any help here would be appreciated.

Thanks

hey @farazk86, you almost had it right with the pipeline - as described in the docs, you also need to provide the task along with the model. in this case we’re dealing with text classification (entailment), so we can use the sentiment-analysis task as follows:

from transformers import pipeline

pipe = pipeline(task="sentiment-analysis", model="roberta-large-mnli")
pipe("I like you. </s></s> I love you.") # returns [{'label': 'NEUTRAL', 'score': 0.5168218612670898}]

If you want to see how to generate the predictions using the tokenizer and model directly, I suggest checking out the MNLI task in this tutorial: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb
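In the meantime, here's a minimal sketch of the same computation using the tokenizer and model directly instead of the pipeline. The helper `logits_to_scores` is my own name, not part of transformers; it just applies a softmax to the raw logits and pairs each probability with its label from the model config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def logits_to_scores(logits, id2label):
    # softmax turns the raw logits into probabilities that sum to 1
    probs = torch.softmax(logits, dim=-1)
    return [{"label": id2label[i], "score": p.item()} for i, p in enumerate(probs[0])]

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

# passing premise and hypothesis as a pair inserts the </s></s> separator for you
inputs = tokenizer("I like you.", "I love you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits_to_scores(logits, model.config.id2label))
```

This prints all three labels (CONTRADICTION / NEUTRAL / ENTAILMENT) with their scores, which is what the hosted widget shows.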


Thanks for the reply @lewtun, this allowed me to use pipeline with this model, but it is not generating the predictions/labels shown on the hosted demo page. There, for the same input I like you. </s></s> I love you., it produces the following outputs:

CONTRADICTION

NEUTRAL

ENTAILMENT

with:

pipe = pipeline(task="sentiment-analysis", model="roberta-large-mnli")
pipe("I like you. </s></s> I love you.")

I am only getting the NEUTRAL output.

How do I get the same outputs as on the hosted API?

Thanks

you can get all the scores by passing return_all_scores=True to the pipeline, as follows:

pipe = pipeline(task="sentiment-analysis", model="roberta-large-mnli", return_all_scores=True)
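with return_all_scores=True the pipeline returns one list per input, each containing a dict per label. A small sketch of picking out the top label from that structure (the scores below are illustrative, not actual model output):

```python
# shape of pipeline(..., return_all_scores=True) output for a single input:
# a list with one entry per input, each entry a list of {label, score} dicts
sample = [[
    {"label": "CONTRADICTION", "score": 0.02},
    {"label": "NEUTRAL", "score": 0.52},
    {"label": "ENTAILMENT", "score": 0.46},
]]

def top_label(scores):
    # pick the dict with the highest score and return its label
    return max(scores, key=lambda d: d["score"])["label"]

print(top_label(sample[0]))  # prints: NEUTRAL
```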

hth!
