Word_ids not working with deberta_v2

Currently, I am working on a token classification. When I have tried to use word_ids function during tokenization, it gave me an error. Let me elaborate with the following example:

# train is a dict holding the tokens and labels
from transformers import AutoTokenizer

# use_fast=True requests the Rust-backed (fast) tokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small", use_fast=True)

# the tokens are already split into words, hence is_split_into_words=True
tokenized_input = tokenizer(train['tokens'][0], is_split_into_words=True)

Now, the problem is that I want to use the word_ids() function. Why? Because "Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word)."
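
For context, this is the label-alignment pattern I want word_ids() for. It is only a minimal sketch with a checkpoint that does ship a fast tokenizer (distilbert-base-uncased); the tokens and labels are made-up examples:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)

words = ["Hugging", "Face", "is", "based", "in", "NYC"]
word_labels = [1, 2, 0, 0, 0, 3]  # one label per word

encoding = tokenizer(words, is_split_into_words=True)

# maps each produced token to the index of the word it came from;
# special tokens ([CLS], [SEP]) map to None
ids = encoding.word_ids()

# copy each word's label onto its sub-tokens; -100 is ignored by the loss
aligned_labels = [-100 if i is None else word_labels[i] for i in ids]
print(ids)
print(aligned_labels)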

When I call this function, the following error is raised:

"word_ids() is not available when using Python-based tokenizers"

When I use distilbert instead, it works fine. Your help is appreciated!
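
For reference, this quick check shows which tokenizer class actually gets loaded for each checkpoint (assuming a transformers version that has no fast tokenizer for DeBERTa-v2/v3; word_ids() only exists on fast, Rust-backed tokenizers, so is_fast is the thing to look at):

from transformers import AutoTokenizer

for checkpoint in ["microsoft/deberta-v3-small", "distilbert-base-uncased"]:
    tok = AutoTokenizer.from_pretrained(checkpoint, use_fast=True)
    # a Python-based (slow) tokenizer reports is_fast == False and has no word_ids()
    print(checkpoint, "->", type(tok).__name__, "| is_fast:", tok.is_fast)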

Check this issue: DeBERTa V3 Fast Tokenizer · Issue #14712 · huggingface/transformers (github.com)