Single token decoder collisions

I am using the LlamaTokenizerFast tokenizer, obtained via:


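(The original snippet did not come through; the sketch below shows a typical way to load a Llama-style fast tokenizer. The checkpoint name `hf-internal-testing/llama-tokenizer` is an assumption for illustration, chosen because it is a small public test repo; substitute your own checkpoint.)

```python
from transformers import AutoTokenizer

# Checkpoint name is assumed for illustration; use your actual model repo.
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

# AutoTokenizer returns the fast (Rust-backed) tokenizer by default.
print(type(tokenizer).__name__)
```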
I noticed that tokenizer.decode() sometimes produces collisions on single tokens. That is, for some distinct integers (not integer lists!) x and y, tokenizer.decode([ x ]) == tokenizer.decode([ y ]). I was wondering whether this is intended behavior and where I can learn more about it. Thank you.
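For what it's worth, a short scan can enumerate these collisions. This is a sketch, again assuming the `hf-internal-testing/llama-tokenizer` checkpoint; it groups every token id by its single-token decode and keeps the groups with more than one member:

```python
from collections import defaultdict

from transformers import AutoTokenizer

# Checkpoint name is assumed; any Llama-style tokenizer shows the effect.
tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

# Group every token id by the string its one-element decode produces.
groups = defaultdict(list)
for tid in range(tok.vocab_size):
    groups[tok.decode([tid])].append(tid)

# Keep only strings that more than one token id decodes to.
collisions = {s: ids for s, ids in groups.items() if len(ids) > 1}
print(f"{len(collisions)} colliding decode strings")
```

Two likely sources of such collisions (my understanding, not authoritative): byte-fallback tokens that represent fragments of multi-byte UTF-8 sequences all decode to the replacement character `\ufffd` in isolation, and SentencePiece pieces that differ only by the leading `▁` (word-boundary) marker can decode to the same text once the leading space is stripped at the start of a sequence.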