Hi, I managed to use my model (a trained sentence-transformers model) locally, but when I push it online it throws this error:
This model could not be loaded by the inference API.
My model is private; I can load it from a local Python client with the
use_auth_token parameter, and it works very well.
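For context, this is roughly what I am doing: the hosted Inference API can be called directly over HTTP with the same access token. A minimal sketch, assuming a placeholder repo ID (`my-user/my-private-model`) and a read-scoped token in the `HF_TOKEN` environment variable:

```python
import json
import os
import urllib.request

# Placeholder repo ID; substitute your own private model.
API_URL = "https://api-inference.huggingface.co/models/my-user/my-private-model"


def build_request(payload: dict, token: str) -> urllib.request.Request:
    """Build an authenticated POST request for the hosted Inference API."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


# Only hit the network if a token is actually configured.
TOKEN = os.environ.get("HF_TOKEN", "")
if TOKEN:
    req = build_request({"inputs": "Hello world"}, TOKEN)
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```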
Could this be an issue on the HF side? How can I get it working on my model card?