Hello,
My code uses the mapping functions word_to_tokens() and word_ids().
With the deberta-v2/v3 tokenizers I am getting this error:
raise ValueError("word_to_tokens() is not available when using Python based tokenizers")
ValueError: word_to_tokens() is not available when using Python based tokenizers
Any workaround for that?
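One workaround I can think of (a minimal sketch, assuming the slow tokenizer still exposes a tokenize() method) is to pre-split the text into words, tokenize each word separately, and rebuild the word_ids / word-to-token mapping by hand. The `build_word_ids` helper and the toy tokenizer below are my own illustration, not part of the transformers API:

```python
def build_word_ids(words, tokenize):
    """Rebuild word_ids()/word_to_tokens()-style mappings by hand.

    words    -- the input pre-split into words
    tokenize -- any callable mapping a word to its list of subword
                tokens (e.g. a slow tokenizer's tokenize method)

    Returns (tokens, word_ids, word_to_tokens), where word_ids[j] is
    the word index of token j, and word_to_tokens[i] is the (start,
    end) token span of word i.
    """
    tokens, word_ids, word_to_tokens = [], [], []
    for i, word in enumerate(words):
        pieces = tokenize(word)          # subword tokens for this word
        start = len(tokens)
        tokens.extend(pieces)
        word_ids.extend([i] * len(pieces))
        word_to_tokens.append((start, start + len(pieces)))
    return tokens, word_ids, word_to_tokens


# Toy tokenizer standing in for a real slow tokenizer's tokenize():
toy = lambda w: [w] if len(w) <= 2 else [w[:2], "##" + w[2:]]
tokens, word_ids, word_to_tokens = build_word_ids(["hello", "hi"], toy)
```

Note this ignores special tokens ([CLS]/[SEP]), which would need to be accounted for before feeding the model. Alternatively, I wondered whether a fast (Rust-based) tokenizer exists for deberta-v2/v3 in newer transformers versions, since word_to_tokens() is only supported on fast tokenizers.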
Thanks in advance,
Shon