ByT5 tokenizer/embedding confusion from description

In the attached picture, and in the original paper, it is stated that the final 100 byte IDs are "reused" as sentinel tokens.

Thus, I expected the hundred IDs from 159 through 258 to be the sentinels: the 3 special tokens (pad=0, eos=1, unk=2) shift the byte values up by 3, so the 256 bytes occupy IDs 3–258, and the final 100 byte IDs are 159–258.
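
To make that concrete, here is the ID arithmetic I had in mind (a quick sketch; the pad/eos/unk offset is my reading of the T5 convention):

```python
# Assumed convention: pad=0, eos=1, unk=2, so byte b maps to token ID b + 3
NUM_SPECIAL = 3
byte_ids = [b + NUM_SPECIAL for b in range(256)]  # byte tokens at IDs 3..258
expected_sentinels = byte_ids[-100:]              # the "reused" final 100 byte IDs
print(expected_sentinels[0], expected_sentinels[-1])  # 159 258
```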

However, upon inspecting the embeddings and the tokenizer, I see that there are in fact 384 token IDs, 125 of which are sentinels. Perhaps I misunderstood something, but why the discrepancy?
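
For reference, this is roughly how I inspected it (a minimal sketch against the public google/byt5-small checkpoint; the values in the comments are what I observe):

```python
from transformers import ByT5Tokenizer, T5ForConditionalGeneration

tok = ByT5Tokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# Total number of token IDs the tokenizer knows about (bytes + specials + sentinels)
print(len(tok))                                       # 384

# Number of rows in the input embedding matrix
print(model.get_input_embeddings().weight.shape[0])   # 384

# The sentinel tokens are registered as additional special tokens
sentinels = [t for t in tok.additional_special_tokens if t.startswith("<extra_id_")]
print(len(sentinels))                                 # 125, not 100
```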