Couldn't instantiate the backend tokenizer

I noticed the recent updates, and now I have a problem loading my SentencePiece tokenizer.
When I run some of my models in the Hugging Face UI, I get the following error:

Couldn’t instantiate the backend tokenizer from one of: (1) a tokenizers library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
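Both transformers and sentencepiece are installed in my environment. As a sanity check (assuming the error really is about the missing package), I confirmed that sentencepiece is importable:

import sentencepiece as spm

# if this import fails, the slow-to-fast tokenizer conversion cannot work
print(spm.__version__)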

I tried to save my tokenizer again after installing transformers and sentencepiece, like this:

from transformers import T5Tokenizer

# load the slow tokenizer from my SentencePiece model file
tok = T5Tokenizer.from_pretrained("my_spm.model")
# save it into the checkpoint directory
tok.save_pretrained(path_to_tf_checkpoint)
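Am I supposed to trigger the conversion explicitly instead? The sketch below is just my guess (path_to_tf_checkpoint is the same directory as above; I haven't verified this is the intended way):

from transformers import T5TokenizerFast

# guess: load the slow tokenizer files saved above, let transformers
# convert them, then save so a tokenizer.json serialization file is written
fast_tok = T5TokenizerFast.from_pretrained(path_to_tf_checkpoint)
fast_tok.save_pretrained(path_to_tf_checkpoint)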

But this doesn't solve the problem.
Any idea what I should do?