is_split_into_words in encode_plus

See the code in transformers/tokenization_utils.py at commit ae54e3c3b18bac0832ad62ea9b896dfd52a09850 (huggingface/transformers on GitHub).

In the _encode_plus function, the tokenizer invokes self.tokenize(t, is_split_into_words=True, **kwargs), but tokenize() does not appear to use the is_split_into_words argument anywhere.
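For context, here is a minimal example of the call I am talking about; the checkpoint name is just an example. With a slow (Python) tokenizer, encode_plus on pre-tokenized input goes through the _encode_plus code path linked above:

```python
from transformers import BertTokenizer

# Slow (Python-based) tokenizer, so encode_plus goes through
# tokenization_utils.py rather than the fast Rust tokenizer path.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Input that is already split into words.
words = ["Hello", "world", "!"]

# With is_split_into_words=True, _encode_plus ends up calling
# self.tokenize(t, is_split_into_words=True, **kwargs) once per word.
encoded = tokenizer.encode_plus(words, is_split_into_words=True)
print(encoded["input_ids"])
```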

It seems very odd that is_split_into_words is passed to tokenize() but apparently never used there. Am I missing something?