How to load an AutoNLP model

Hello,

I am trying to load an AutoNLP model but am running into an issue.
I tried loading it in a pipeline, like this:

model = pipeline("text2text-generation", model="PATH")

but I am getting the following error.

ValueError: Could not load model with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSeq2SeqLM'>, <class 'transformers.models.mt5.modeling_mt5.MT5ForConditionalGeneration'>, <class 'transformers.models.mt5.modeling_tf_mt5.TFMT5ForConditionalGeneration'>).

Is it actually possible to use the pipeline function if the model is not public?

Thanks,

Antoine

Yes, you can use private models if you pass the use_auth_token argument.
More:

use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).

model_kwargs — Additional dictionary of keyword arguments passed along to the model’s from_pretrained(..., **model_kwargs) function.

kwargs — Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values).
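For example, something like this should work (a minimal sketch; "PATH" stands in for your private repo id, and use_auth_token=True assumes you have already run transformers-cli login):

from transformers import pipeline

# "PATH" is a placeholder for the private repo id, e.g. "username/model-name"
model = pipeline(
    "text2text-generation",
    model="PATH",
    use_auth_token=True,  # reuse the token stored by transformers-cli login
)

# illustrative call; the pipeline returns a list of dicts with "generated_text"
print(model("some input text")[0]["generated_text"])

Alternatively, pass the token string itself (from your account's token settings page) as use_auth_token="..." instead of True.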