I am working with FlauBERT on a token classification task. To compensate for the mismatch between the number of word-level labels and the larger number of tokens produced after tokenization, I tried calling `word_ids()` on the tokenized input, but it raises an error. The method does appear when I run `dir(tokenized_input)`, yet calling it fails with:

`word_ids() is not available when using Python-based tokenizers`
For reference: Tokenizer - use of word_ids to map labels to newer tokens.
I am using FlauBERT for a Named Entity Recognition task.
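From what I understand, `word_ids()` is only implemented on the fast (Rust-backed) tokenizers, and FlauBERT only ships a slow Python tokenizer, so the method shows up in `dir()` (it is defined on the base class) but raises at call time. As a workaround I have been experimenting with aligning labels manually by tokenizing word by word: label the first subtoken of each word and mask the continuations with `-100`. A minimal sketch of that idea (the `toy_tokenize` splitter below is a stand-in I made up so the example runs without downloading a model; in practice you would pass `tokenizer.tokenize` from the FlauBERT tokenizer instead):

```python
def align_labels_with_tokens(words, labels, tokenize_word, ignore_index=-100):
    """Tokenize word by word; keep the label on the first subtoken,
    mask continuation subtokens with ignore_index (-100 is ignored
    by PyTorch's CrossEntropyLoss)."""
    tokens, aligned = [], []
    for word, label in zip(words, labels):
        subtokens = tokenize_word(word)
        tokens.extend(subtokens)
        # first subtoken keeps the word's label, the rest are masked
        aligned.extend([label] + [ignore_index] * (len(subtokens) - 1))
    return tokens, aligned


def toy_tokenize(word):
    # hypothetical stand-in subword splitter: splits words longer than
    # 4 characters in half, mimicking BPE-style fragmentation
    if len(word) > 4:
        mid = len(word) // 2
        return [word[:mid], word[mid:] + "</w>"]
    return [word + "</w>"]


tokens, aligned = align_labels_with_tokens(
    ["Jean", "travaille", "à", "Paris"], [1, 0, 0, 2], toy_tokenize
)
print(tokens)   # ['Jean</w>', 'trav', 'aille</w>', 'à</w>', 'Pa', 'ris</w>']
print(aligned)  # [1, 0, -100, 0, 2, -100]
```

This sidesteps `word_ids()` entirely, though you still have to add special tokens and padding yourself afterwards, so I would prefer a cleaner solution if one exists.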