Space token ' ' cannot be added when is_split_into_words=True

For example:

>>> tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
>>> tokenizer.add_tokens(' ')
1
>>> tokenizer.encode('你好 世界', add_special_tokens=False)
[872, 1962, 21128, 686, 4518]
>>> tokenizer.encode(['你','好',' ', '世', '界'], is_split_into_words=True, add_special_tokens=False)
[872, 1962, 686, 4518]

Obviously, the space token is ignored. But if I add another token such as '[balabala]' instead, it is kept.
So what is the proper way to do this?
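For comparison, this is roughly how I checked that a non-space added token survives (the exact id assigned depends on how many tokens have already been added and on the transformers version):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
tokenizer.add_tokens('[balabala]')  # assigned a new id, e.g. 21128 if it is the first added token
ids = tokenizer.encode(['你', '好', '[balabala]', '世', '界'],
                       is_split_into_words=True,
                       add_special_tokens=False)
# the '[balabala]' id appears in the middle of the sequence,
# whereas ' ' at the same position is silently dropped
print(ids)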

I found that one workaround is to call convert_tokens_to_ids directly, but then I lose the convenient features of encode and __call__, such as padding and automatic attention_mask generation.
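Here is a minimal sketch of what I mean, assuming that combining convert_tokens_to_ids with prepare_for_model is a reasonable way to get padding and the attention_mask back (I am not sure this is the intended route):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
tokenizer.add_tokens(' ')

# Map the pre-split tokens to ids directly, so the space is not dropped
words = ['你', '好', ' ', '世', '界']
ids = tokenizer.convert_tokens_to_ids(words)

# prepare_for_model can then apply padding and build the attention_mask,
# which encode()/__call__ would normally do for me
encoded = tokenizer.prepare_for_model(
    ids,
    add_special_tokens=False,
    padding='max_length',
    max_length=8,
    return_attention_mask=True,
)
print(encoded['input_ids'])
print(encoded['attention_mask'])

But this still feels like reimplementing part of __call__ by hand, so I would like to know whether there is a supported way to keep the space token with is_split_into_words=True.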