Maybe there is a bug in BertTokenizer?

I am trying to add a custom token to the tokenizer.

I found this code in the source of `add_tokens`:

    Args:
        new_tokens (:obj:`List[str]` or :obj:`List[tokenizers.AddedToken]`):
            Token(s) to add in vocabulary. A token is only added if it's not already in the vocabulary (tested by
            checking if the tokenizer assigns the index of the ``unk_token`` to them).
        special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether or not the tokens should be added as special tokens.

    Returns:
        :obj:`int`: The number of tokens actually added to the vocabulary.

    Examples::

        # Let's see how to increase the vocabulary of Bert model and tokenizer
        tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
        model = BertModel.from_pretrained('bert-base-uncased')

        num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
        print('We have added', num_added_toks, 'tokens')
        # Note: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
        model.resize_token_embeddings(len(tokenizer))
    """
    new_tokens = [str(tok) for tok in new_tokens]
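
From the docstring, my expectation was roughly the following (a rough sketch, assuming `bert-base-uncased`; the return value should only count tokens that were not already in the vocabulary):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

    # 'the' is already in the vocabulary, so nothing is added
    print(tokenizer.add_tokens(['the']))       # expected: 0

    # a token that currently maps to [UNK] gets a brand-new id at the end of the vocab
    print(tokenizer.add_tokens(['new_tok1']))  # expected: 1
    print(tokenizer.convert_tokens_to_ids('new_tok1'))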

But after I run

    tokenizer.add_tokens('ss##e', special_tokens=True)

there is no change in the tokenizer's special tokens.

I have tried this several times:
[image]

but it seems like the result is the same whether `special_tokens` is True or False.
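
To show what I am checking, here is a rough sketch (assuming `bert-base-uncased` and the slow `BertTokenizer`; these are the attributes I am looking at on the tokenizer object):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    print(tokenizer.special_tokens_map)          # only [CLS], [SEP], [PAD], [UNK], [MASK]
    print(tokenizer.additional_special_tokens)   # []

    tokenizer.add_tokens('ss##e', special_tokens=True)

    # the token does get an id, but the special-token attributes look unchanged
    print(tokenizer.convert_tokens_to_ids('ss##e'))
    print(tokenizer.special_tokens_map)          # same as before
    print(tokenizer.additional_special_tokens)   # still []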

I noticed that there is a special comment about ALBERT in the code:

    # Make sure we don't split on any special tokens (even they were already in the vocab before e.g. for Albert)
    if special_tokens:
        self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(new_tokens)))
    else:
        # Or on the newly added tokens
        self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(tokens_to_add)))
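
If I read these two branches correctly, for a BERT tokenizer the flag only affects `unique_no_split_tokens`, not the special-token attributes. A rough check (same assumptions as above; for a token that is already in the vocabulary, only the `special_tokens=True` branch unions it in):

    from transformers import BertTokenizer

    tok_true = BertTokenizer.from_pretrained('bert-base-uncased')
    tok_false = BertTokenizer.from_pretrained('bert-base-uncased')

    # 'the' is already in the vocabulary, so tokens_to_add ends up empty in both cases
    tok_true.add_tokens(['the'], special_tokens=True)
    tok_false.add_tokens(['the'], special_tokens=False)

    # the if-branch unions over new_tokens, the else-branch over tokens_to_add (empty here)
    print('the' in tok_true.unique_no_split_tokens)   # expected: True
    print('the' in tok_false.unique_no_split_tokens)  # expected: False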

Is there a difference between BERT and ALBERT that I don't know about, or is something wrong here?