When adding a new token to the vocabulary, the Python ("slow") tokenizer and the Rust-backed "fast" tokenizer behave differently.
from transformers import BartTokenizer, BartTokenizerFast
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
tokenizer_fast = BartTokenizerFast.from_pretrained('facebook/bart-large')
# Register the same new token with both tokenizers.
tokenizer.add_tokens("<NEW_TOKEN>")
tokenizer_fast.add_tokens("<NEW_TOKEN>")
sentence = "I added a <NEW_TOKEN> in the vocabulary."
print(tokenizer.encode(sentence))
# [0, 100, 355, 10, 50265, 179, 5, 32644, 4, 2]
print(tokenizer_fast.encode(sentence))
# [0, 100, 355, 10, 1437, 50265, 11, 5, 32644, 4, 2]
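Mapping the ids back to token strings makes the difference easier to see (the commented output below is derived from the ids above; the BPE vocabulary marks a leading space with "Ġ"):
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(sentence)))
# ['<s>', 'I', 'Ġadded', 'Ġa', '<NEW_TOKEN>', 'in', 'Ġthe', 'Ġvocabulary', '.', '</s>']
print(tokenizer_fast.convert_ids_to_tokens(tokenizer_fast.encode(sentence)))
# ['<s>', 'I', 'Ġadded', 'Ġa', 'Ġ', '<NEW_TOKEN>', 'Ġin', 'Ġthe', 'Ġvocabulary', '.', '</s>']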
The fast tokenizer preserves the surrounding whitespace: it emits a standalone space token (1437, "Ġ") before <NEW_TOKEN> and keeps the leading space on the following word ("Ġin", 11). The slow tokenizer instead strips the whitespace around the added token, so the space before <NEW_TOKEN> disappears and "in" is encoded without its leading space (179 instead of 11).
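If you need the two to agree, one option is to register the token as an AddedToken with explicit whitespace flags instead of a bare string. This is a sketch, not a guaranteed fix: the lstrip/rstrip values below are assumptions about the behavior you want, and how faithfully the slow tokenizer honors these flags has varied across transformers versions, so re-check the ids after applying it.
from transformers import AddedToken, BartTokenizer, BartTokenizerFast

# Start from fresh instances so the plain-string "<NEW_TOKEN>" added above
# does not shadow the configured one.
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
tokenizer_fast = BartTokenizerFast.from_pretrained('facebook/bart-large')

# lstrip=True asks the tokenizer to absorb the space to the left of the match
# into the added token; rstrip=False leaves the right-hand side untouched.
new_token = AddedToken("<NEW_TOKEN>", lstrip=True, rstrip=False)
tokenizer.add_tokens(new_token)
tokenizer_fast.add_tokens(new_token)

print(tokenizer.encode(sentence))
print(tokenizer_fast.encode(sentence))
With lstrip=True the fast tokenizer should no longer emit the standalone 1437, since the preceding space is consumed by the token match.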