On my model card, I used to be able to run the hosted inference widget successfully, but recently it started showing an error: "<mask>" must be present in your input.
My model uses a RoBERTa MLM with a BERT tokenizer, so the mask token is actually "[MASK]". I have already set it in tokenizer_config.json,
but the Inference API still does not pick it up.
It used to work fine, but recently it started raising this error. It seems the front end now double-checks the mask token. How can I set the mask token correctly? Is there any documentation on configuring the mask token for the Inference API?
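For context, this is a minimal sketch of how I understand the mask token can be declared in tokenizer_config.json (field names follow the standard transformers tokenizer config format; the exact contents of my file may differ):

```json
{
  "tokenizer_class": "BertTokenizer",
  "mask_token": "[MASK]"
}
```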
Thanks!
To reproduce
Steps to reproduce the behavior:
- Go to ethanyt/guwenbert-base · Hugging Face
- Run an example with “[MASK]”
Expected behavior
In the past it worked. See the snapshot in guwenbert/README_EN.md at main · Ethan-yt/guwenbert · GitHub