Hosted Inference API: "Error loading tokenizer: Can't load config"


I uploaded my classification model fine-tuned on BERT. There is no issue running the model from the Hub with the 'sentiment-analysis' pipeline, but the Hosted Inference API fails with the tokenizer/config loading error above.

Can someone help me with this?
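For reference, this is roughly how the model loads locally without any error (the model id below is a placeholder for a public sentiment checkpoint; substitute your own fine-tuned repo id):

```python
from transformers import pipeline

# Placeholder model id; replace with your own fine-tuned BERT checkpoint.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("This works fine locally.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': ...}]
```

Running this locally succeeds, so the config/tokenizer issue only shows up on the Hosted Inference API side.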


Maybe @mfuntowicz or @julien-c can help?

This has been answered on GitHub.
