Hi Bram Vanroy, I was trying the above-mentioned code with BertTokenizer instead of AutoTokenizer, but I get the error below.
ValueError: word_ids() is not available when using Python-based tokenizers
Can you please let me know what changes I should make to the code to get a list indicating the word corresponding to each token, where special tokens added by the tokenizer are mapped to None and the other tokens are mapped to the index of their corresponding word?