Thank you so much @g3casey, I had not noticed that the documentation already includes a note about this.
However, another problem has come up. I did correct my tokenizer call by adding the `return_offsets_mapping` parameter:
```python
tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True, return_offsets_mapping=True)
```
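For context, here is a minimal sketch of how the offset mapping can be used to align word-level labels with sub-word tokens (the checkpoint name, the `example` dict, and the `ner_tags` labels are illustrative assumptions, and a fast tokenizer is assumed since only those return an offset mapping):

```python
from transformers import AutoTokenizer

# Assumption: any fast tokenizer works; bert-base-cased is just an example
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Hypothetical pre-split example with one word-level label per word
example = {"tokens": ["HuggingFace", "is", "great"], "ner_tags": [3, 0, 0]}

tokenized_inputs = tokenizer(
    example["tokens"],
    truncation=True,
    is_split_into_words=True,
    return_offsets_mapping=True,
)

# With is_split_into_words=True, offsets are relative to each word,
# so start == 0 marks the first sub-word of a word
labels = []
word_index = -1
for start, end in tokenized_inputs["offset_mapping"]:
    if start == 0 and end == 0:
        # Special token ([CLS], [SEP]): ignored by the loss
        labels.append(-100)
    elif start == 0:
        # First sub-word of the next word: take its word-level label
        word_index += 1
        labels.append(example["ner_tags"][word_index])
    else:
        # Continuation sub-word: also ignored by the loss
        labels.append(-100)

print(labels)
```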

What also confuses me: if `return_offsets_mapping` was not set to `True` during training, how could I have gotten such a high result…