Hosted Inference API: Error loading tokenizer Can't load config

Hi,

https://huggingface.co/sampathkethineedi/industry-classification-api

I uploaded my classification model fine-tuned on BERT. The model runs fine when loaded from the Hub with the 'sentiment-analysis' pipeline, but the Hosted Inference API fails with the tokenizer/config error above.
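For reference, here is a minimal sketch of the local check described above, using the `transformers` pipeline API. The example input sentence is an assumption for illustration; the model id is the one linked in the post.

```python
from transformers import pipeline

# Load the fine-tuned classification model from the Hub using the
# "sentiment-analysis" (text-classification) pipeline, which the post
# reports works locally even though the Hosted Inference API fails.
classifier = pipeline(
    "sentiment-analysis",
    model="sampathkethineedi/industry-classification-api",
)

# Hypothetical sample input, not from the original post.
result = classifier("The company provides cloud-based software for payroll management.")
print(result)
```

If this runs locally but the Hosted Inference API still errors, the mismatch is on the hosting side (e.g. how the API loads the tokenizer/config files from the repo) rather than in the model itself.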

Can someone help me with this?

Thanks!

Maybe @mfuntowicz or @julien-c can help?

This has been answered on GitHub.
