Hi @echarlaix,
I’m trying to download a private model from the Hub through `ORTModelForSequenceClassification`, but it raises errors (the download works fine with Transformers, as shown at the bottom of this post). Do you know what the problem is between `ORTModelForSequenceClassification` and the `revision` and `use_auth_token` arguments? Thank you.
**Error with `revision`**

```python
# install onnxruntime support for optimum
!python -m pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]

# parameters
API_TOKEN = "xxxx"
model_checkpoint = "orga/xxxx"
revision = "v1.1"

# load a model from transformers and export it to the ONNX format
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained(
    model_checkpoint,
    use_auth_token=API_TOKEN,
    revision=revision,
    from_transformers=True,
)
```
**Error with `use_auth_token`**

```python
# install onnxruntime support for optimum
!python -m pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]

# parameters
API_TOKEN = "xxxx"
model_checkpoint = "orga/xxxx"
revision = "v1.1"

# load a model from transformers and export it to the ONNX format
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained(
    model_checkpoint,
    use_auth_token=API_TOKEN,
    from_transformers=True,
)
```
**Without error with the Transformers library**

To check whether the error came from my private model, I ran the following code, which completed without error.

```python
# install transformers
!pip install transformers

# parameters
API_TOKEN = "xxxx"
model_checkpoint = "orga/xxxx"
revision = "v1.1"

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    model_checkpoint,
    revision=revision,
    use_auth_token=API_TOKEN,
)
```
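In the meantime, one workaround I could try is to download the repository locally first and then point optimum at the local path, so that `ORTModelForSequenceClassification` never has to handle the token or revision itself. This is an untested sketch; I’m assuming that `snapshot_download` from `huggingface_hub` accepts `revision` and `use_auth_token` in the installed version.

```python
# Untested workaround sketch: fetch the private repo locally first,
# then export from the local copy so optimum only sees a local path.
from huggingface_hub import snapshot_download
from optimum.onnxruntime import ORTModelForSequenceClassification

API_TOKEN = "xxxx"
model_checkpoint = "orga/xxxx"
revision = "v1.1"

# snapshot_download handles the revision and authentication
# (assumption: this signature matches the installed huggingface_hub)
local_dir = snapshot_download(
    repo_id=model_checkpoint,
    revision=revision,
    use_auth_token=API_TOKEN,
)

# load and export from the local copy; no token or revision needed here
model = ORTModelForSequenceClassification.from_pretrained(
    local_dir,
    from_transformers=True,
)
```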