Special_tokens_mask

Hi there, when training a tokenizer from scratch, after encoding text I got an unexpected special_tokens_mask: it is not [1, 0, 0, 0, 0, 0, 0, 0, 0, 1] as I expected. Can anyone help me out? Thank you very much!
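
Below is a minimal sketch of the kind of script I'm working from (the training text, special tokens, and word-level model here are placeholders, assuming the Hugging Face `tokenizers` library):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.processors import TemplateProcessing
from tokenizers.trainers import WordLevelTrainer

# Build and train a small word-level tokenizer from scratch.
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = WordLevelTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]"])
tokenizer.train_from_iterator(
    ["some training text", "more training text"], trainer
)

# Without a post-processor, encode() adds no special tokens at all,
# so special_tokens_mask comes back as all zeros. Attaching one wraps
# each sequence in [CLS] ... [SEP].
tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    special_tokens=[
        ("[CLS]", tokenizer.token_to_id("[CLS]")),
        ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ],
)

encoding = tokenizer.encode("some more text")
print(encoding.tokens)               # ['[CLS]', 'some', 'more', 'text', '[SEP]']
print(encoding.special_tokens_mask)  # expected: [1, 0, 0, 0, 1]
```

In case it matters: if the encoding step goes through a `PreTrainedTokenizerFast` wrapper instead of the raw `Tokenizer`, the mask is only included in the output when `return_special_tokens_mask=True` is passed to the call.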