word_to_tokens() and word_ids() ---- microsoft/deberta-v2/v3

Hello,

My code uses the mapping functions word_to_tokens() and word_ids().

While using the deberta-v2/v3 tokenizers, I get this error message:

    raise ValueError("word_to_tokens() is not available when using Python based tokenizers")
ValueError: word_to_tokens() is not available when using Python based tokenizers

Any workaround for that?

Thanks in advance,
Shon


I am facing a similar issue with FlauBERT’s tokenizer, where the word_ids() method is not working and I am shown the same error as you! If you have found a workaround already, please post an update!

No… I didn’t. Still waiting for help here.
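
For anyone hitting this: word_ids() and word_to_tokens() are only implemented on the fast (Rust-based) tokenizers, and these models originally shipped with only a slow Python tokenizer. Two things to try:

First, newer transformers releases include DebertaV2TokenizerFast, so upgrading and loading with use_fast=True may make word_ids() work out of the box.

Second, if you are stuck on a slow tokenizer, you can rebuild the word-to-token mapping yourself. Below is a minimal sketch, not a definitive implementation: the manual_word_ids helper is hypothetical (not part of transformers), it assumes you already have pre-split words, and it ignores special tokens, so its output can differ slightly from a fast tokenizer’s word_ids() for context-dependent tokenizations.

    from transformers import AutoTokenizer

    # Slow (Python-based) tokenizer: word_ids()/word_to_tokens() are unavailable.
    # (The slow DeBERTa tokenizer also requires sentencepiece to be installed.)
    tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base", use_fast=False)

    def manual_word_ids(words):
        # Hypothetical helper: encode one word at a time and record which
        # token positions came from which word. Ignores special tokens.
        input_ids, word_ids = [], []
        for word_idx, word in enumerate(words):
            ids = tokenizer.encode(word, add_special_tokens=False)
            input_ids.extend(ids)
            word_ids.extend([word_idx] * len(ids))
        return input_ids, word_ids

    words = ["Hello", "world", "!"]
    ids, wids = manual_word_ids(words)
    print(tokenizer.convert_ids_to_tokens(ids))  # subword tokens
    print(wids)  # word index per token, like fast word_ids() minus specials

If you need the model inputs afterwards, remember to add the special tokens (and attention mask) yourself, e.g. via tokenizer.prepare_for_model(input_ids), since the sketch deliberately leaves them out to keep the mapping simple.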