I trained a Chinese RoBERTa model. In the model card, the inference widget uses the tokenizer class defined in `config.json` (`RobertaTokenizer`), but my model actually uses `BertTokenizer`. Can I customize the tokenizer used by the widget on the model card, just like I can choose any combination of model and tokenizer in a pipeline?
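For reference, this combination works fine locally when I build the pipeline myself (a minimal sketch; `my-username/chinese-roberta` is just a placeholder for my repository, and feature extraction is only an example task):

```python
from transformers import BertTokenizer, RobertaModel, pipeline

# Placeholder repo ID; substitute the actual model repository.
model_id = "my-username/chinese-roberta"

# Load the two classes explicitly instead of relying on the class names
# in config.json, pairing the RoBERTa weights with a BERT tokenizer.
tokenizer = BertTokenizer.from_pretrained(model_id)
model = RobertaModel.from_pretrained(model_id)

features = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
print(len(features("这是一个测试句子。")[0]))  # one embedding per token
```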
I also tried using `BertModel` instead of `RobertaModel` by copying the weights from the RoBERTa checkpoint into a BERT model, but the position embeddings are handled differently, so the outputs don't match. I therefore have to keep this combination of `RobertaModel` and `BertTokenizer`. Does that mean I can't use the inference widget?