I noticed the recent updates, and now I have a problem loading my SentencePiece tokenizer.
When I run some of my models in the Hugging Face UI, I get the following error:
Couldn't instantiate the backend tokenizer from one of: (1) a tokenizers library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
I tried saving my tokenizer again after installing transformers and sentencepiece, like:
tok = T5Tokenizer.from_pretrained("my_spm.model")
But this doesn’t solve my problem.
Any idea what I should do?