Issue with Flaubert tokenizer: word_ids() method not available for NER task

I am working with Flaubert on a token classification task. After tokenization there are more tokens than word-level labels, so I tried to use word_ids() to realign the labels with the new tokens, but it raises an error saying the method is not available. The method does appear when I run dir(tokenized_input), yet calling it fails:

`Error: word_ids() is not available when using python-based tokenizer.`

For reference: the Tokenizer documentation describes using word_ids to map labels onto the new tokens.

I am using Flaubert for a Named Entity Recognition task!
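For anyone hitting the same error: word_ids() is only provided by fast (Rust-backed) tokenizers, and Flaubert only ships a Python-based one. One workaround is to rebuild the word-to-token mapping yourself by tokenizing word by word. Below is a minimal sketch; the helper name is made up, the `tokenize_word` callable stands in for something like `tokenizer.tokenize` from a slow transformers tokenizer, and the `-100` masking follows the usual token-classification convention of labeling only the first subword:

```python
def align_labels_with_tokens(words, labels, tokenize_word):
    """Manually reproduce the effect of word_ids() for a slow tokenizer.

    words         -- list of pre-split words
    labels        -- one label id per word
    tokenize_word -- callable returning the subword pieces for one word
                     (e.g. a slow tokenizer's .tokenize method -- assumption)

    Returns (tokens, word_ids, aligned_labels), where aligned_labels keeps
    each word's label on its first subword and uses -100 on the rest so the
    loss ignores continuation pieces.
    """
    tokens, word_ids, aligned = [], [], []
    for wi, (word, label) in enumerate(zip(words, labels)):
        pieces = tokenize_word(word)
        for pi, piece in enumerate(pieces):
            tokens.append(piece)
            word_ids.append(wi)           # same word index for every piece
            aligned.append(label if pi == 0 else -100)
    return tokens, word_ids, aligned
```

You would still need to prepend/append the special tokens (with label -100) before feeding the batch to the model, since this sketch only handles the word pieces themselves.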

@lewtun

You can check this issue: DeBERTa V3 Fast Tokenizer · Issue #14712 · huggingface/transformers (github.com). I believe it addresses your problem: word_ids() requires a fast tokenizer, and that thread discusses how to obtain one when a model only ships a slow (Python-based) tokenizer.