Pipeline cannot infer suitable model classes

When I try to use the inference pipeline widget for my private project, I get this error.

But the prediction works when I run it locally (it gives back the proper labels).

Pinging @Narsil


It seems to be working now; did you fix anything in particular?

Edit: Looked at the wrong repo I guess.


Hi. Yes, it is working now, thanks. But it now only displays 5 of the classification labels (for context, there are 14 labels). The labels shown are not the top 5 predictions, but the first 5 labels defined in config.json. Is there a way in the pipeline to set how many classification labels are shown as percentages, from highest to lowest? Reference
Can we manually override the top_k value from README.md?
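For illustration, here is a minimal sketch of what "top-k" means for the prediction list the pipeline returns. The pipeline call in the comment is an assumption (model name from this thread; whether top_k can be passed at call time depends on your transformers version and the pipeline task), so the executable part works on hypothetical scores:

```python
# Hypothetical sketch -- the pipeline call is not verified against this model:
#
#   from transformers import pipeline
#   pipe = pipeline(model="NR/myprivate_model")
#   preds = pipe("sample.wav", top_k=14)  # ask for all 14 labels, if supported
#
# Given a full list of {"label", "score"} dicts, top-k is just a sort:
def top_k(predictions, k):
    """Return the k highest-scoring label dicts, best first."""
    return sorted(predictions, key=lambda p: p["score"], reverse=True)[:k]

# Hypothetical scores for illustration only:
preds = [
    {"label": "female_happy", "score": 0.98},
    {"label": "male_sad", "score": 0.01},
    {"label": "female_angry", "score": 0.005},
]
print(top_k(preds, 2))  # two best labels, highest score first
```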

Hi @Rajaram1996

The predictions should always show the top 5 scores; can you provide a link to the wrong behavior?


When I use the hosted inference widget, I only get the confidence scores of 5 seemingly random emotions, which are not the top-5 values. I am adding an image for reference.

When I predict with the model in my local inference, I get “female_happy 0.98”, i.e. 98% (image attached below), which should be the correct prediction (in total there are 14 classes to predict from).

The API is powered by the pipelines. It seems you’re trying a private model, so I don’t really have access, but you can check out:

from transformers import pipeline

pipe = pipeline(model="NR/myprivate_model")

It should work out of the box and produce the same thing as the API.

Hey, thanks for working through this. You can check the model here. Even that pipeline snippet, which I tried in my Colab notebook, gave me this:
[{'label': 'male_disgust', 'score': 0.07790017127990723}, {'label': 'female_sad', 'score': 0.07736612856388092}, {'label': 'female_neutral', 'score': 0.07557496428489685}, {'label': 'female_angry', 'score': 0.07553239166736603}, {'label': 'female_disgust', 'score': 0.07268150895833969}]
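As an aside, those scores are themselves a clue: with 14 classes, a softmax over logits that carry no signal lands every class near 1/14 ≈ 0.071, which is almost exactly what the snippet above shows. A small sketch of that reasoning (the 6.5-nat logit gap is a made-up contrast value):

```python
import math

# The scores reported above cluster near 1/14, which is what a softmax
# over 14 classes produces when the head's logits carry no signal --
# a hint that the classifier weights are effectively random.
reported = [0.07790, 0.07737, 0.07557, 0.07553, 0.07268]
uniform = 1 / 14  # ~0.0714

# Every reported score sits within ~10% of the uniform baseline:
assert all(abs(s - uniform) / uniform < 0.10 for s in reported)

# For contrast, a confident head: a logit gap of ~6.5 nats gives ~98%,
# matching the "female_happy 0.98" seen locally.
logits = [6.5] + [0.0] * 13
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
print(round(probs[0], 2))  # -> 0.98
```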
Let me know if you need more info.

@Narsil the model I want to use is public but doesn’t have an appropriate pipeline. What are my options for using it for inference? Figured it out.

@Rajaram1996 ,

I just checked your model, and it seems it’s using

'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias', 'classifier.dense.weight'

instead of

'classifier.bias', 'classifier.weight', 'projector.weight', 'projector.bias'

when loading (check the warnings), which leads to random outputs no matter the input.
Try renaming the tensors appropriately for it to work.
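A minimal sketch of such a rename, assuming the mapping is classifier.dense.* → projector.* and classifier.out_proj.* → classifier.* (that pairing is a guess; verify it against the expected names in the load warnings). In practice you would apply this to a dict loaded with torch.load and write it back with torch.save; plain dicts are used here so the sketch is self-contained:

```python
# Assumed key mapping -- check it against your model's load warnings:
RENAMES = {
    "classifier.dense.weight": "projector.weight",
    "classifier.dense.bias": "projector.bias",
    "classifier.out_proj.weight": "classifier.weight",
    "classifier.out_proj.bias": "classifier.bias",
}

def rename_keys(state_dict):
    """Return a new state dict with the mismatched head keys renamed;
    all other keys pass through unchanged."""
    return {RENAMES.get(k, k): v for k, v in state_dict.items()}

# Toy stand-in for a real checkpoint dict:
old = {"classifier.dense.bias": "tensor", "encoder.layer.0.weight": "tensor"}
print(sorted(rename_keys(old)))  # -> ['encoder.layer.0.weight', 'projector.bias']
```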

So most likely the pipeline is not broken; the weights are simply loaded incorrectly, so the classification head gives garbage output.

Thank you for that information. I will check it out.