Single-token decoder collisions

I am using the LlamaTokenizerFast tokenizer, obtained via:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3")

I noticed that tokenizer.decode() sometimes produces collisions on single tokens. That is, for some pairs of distinct integers (not integer lists!) x and y, tokenizer.decode([x]) == tokenizer.decode([y]). I was wondering whether this is intended behavior and where I can learn more about it. Thank you.
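
For reference, here is a minimal sketch that enumerates such collisions by decoding every id in the vocabulary and grouping ids by their decoded string (the exact collision set will likely depend on the tokenizer version):

from collections import defaultdict
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3")

# Group every token id by the string it decodes to on its own.
groups = defaultdict(list)
for token_id in range(tokenizer.vocab_size):
    groups[tokenizer.decode([token_id])].append(token_id)

# Keep only decoded strings produced by more than one id.
collisions = {text: ids for text, ids in groups.items() if len(ids) > 1}
print(f"{len(collisions)} decoded strings with colliding ids")
for text, ids in list(collisions.items())[:5]:
    print(repr(text), ids)

I suspect at least some collisions come from the byte-fallback tokens (the <0x..> entries in the vocabulary): a byte that is not valid UTF-8 on its own decodes to the replacement character "�", so many such ids map to the same string. But I would like to understand the full picture.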