What happens when we add a token to the tokenizer?

What is the difference between adding a word directly to the tokenizer's vocabulary with add_tokens and leaving it to the regular subword tokenization? For example:

tokenizer.tokenize("pedestration")
# ['pe', '##des', '##tra', '##tion']

tokenizer.add_tokens("pedestration")
tokenizer.tokenize("pedestration")
# ['pedestration']
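To illustrate the behavior above, here is a toy sketch (not the real Hugging Face implementation) of why an added token comes out whole: added tokens are matched as complete strings before the subword algorithm ever runs, while unknown words fall through to a greedy WordPiece-style split. The vocab and the wordpiece helper below are simplified assumptions for illustration only.

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first WordPiece split (simplified sketch)."""
    pieces, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces carry the ## prefix
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no vocab piece matches this span
        pieces.append(piece)
        start = end
    return pieces

def tokenize(word, vocab, added_tokens):
    # Added tokens short-circuit subword splitting entirely.
    if word in added_tokens:
        return [word]
    return wordpiece(word, vocab)

vocab = {"pe", "##des", "##tra", "##tion"}

print(tokenize("pedestration", vocab, set()))
# -> ['pe', '##des', '##tra', '##tion']

print(tokenize("pedestration", vocab, {"pedestration"}))
# -> ['pedestration']
```

Note that in the real library, after calling tokenizer.add_tokens you typically also need model.resize_token_embeddings(len(tokenizer)) so the model's embedding matrix has a (randomly initialized) row for the new token.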