Same sequence maps to different token ids

Is it possible to use the tokenizers package to create a custom tokenizer that switches its string-to-id (stoi) behavior when it encounters a special token?
For example, the sequence ABC appears multiple times in the same example, but its occurrences before and after a modification token have different underlying meanings. Accordingly, the tokenizer should map the ABC before the mod token to different ids than the ABC after it, and then back again.
Example tokenization:

input_ids: 0 1 2 3 4 5 6 7 8 1 2 3 9
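To make the intent concrete, here is a minimal sketch in plain Python (not the `tokenizers` package) of a stoi that toggles whenever it sees a mod token. The vocabularies, the `<mod>` token name, and all ids are hypothetical, just to illustrate the switching behavior:

```python
# Hypothetical vocabularies: same surface tokens, different ids per "mode".
PRE_STOI = {"A": 1, "B": 2, "C": 3}      # ids before a <mod> token
POST_STOI = {"A": 9, "B": 10, "C": 11}   # ids after a <mod> token
SPECIALS = {"<bos>": 0, "<mod>": 8, "<eos>": 12}

def encode(tokens):
    ids, stoi = [], PRE_STOI
    for tok in tokens:
        if tok in SPECIALS:
            ids.append(SPECIALS[tok])
            if tok == "<mod>":
                # toggle the mapping each time <mod> appears,
                # so the ids switch "and then back again"
                stoi = POST_STOI if stoi is PRE_STOI else PRE_STOI
        else:
            ids.append(stoi[tok])
    return ids

print(encode(["<bos>", "A", "B", "C", "<mod>", "A", "B", "C", "<eos>"]))
# → [0, 1, 2, 3, 8, 9, 10, 11, 12]
```

With the fast Rust tokenizers this kind of stateful logic doesn't fit the standard pipeline directly, which is why a wrapper or post-processing step over the emitted ids is a common workaround.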
A larger issue is how you would synchronize the tokenizers so that the ids for special tokens line up while the rest are shifted appropriately. I'm assuming the tokenizer would need 2 different stoi and itos…
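One way to keep the special-token ids aligned is to build both vocabularies from the same special-token block and shift each mode's regular tokens into disjoint ranges; a single itos then stays unambiguous. A sketch, with hypothetical token names:

```python
# Special tokens get the same low ids in both vocabularies;
# regular tokens are offset into non-overlapping id ranges.
specials = ["<bos>", "<mod>", "<eos>"]
regular = ["A", "B", "C"]

special_ids = {t: i for i, t in enumerate(specials)}
base = len(specials)

pre_stoi = {**special_ids,
            **{t: base + i for i, t in enumerate(regular)}}
post_stoi = {**special_ids,
             **{t: base + len(regular) + i for i, t in enumerate(regular)}}

# Because the regular-id ranges are disjoint, one shared itos suffices.
itos = {i: t for t, i in pre_stoi.items()}
itos.update({i: t for t, i in post_stoi.items() if t not in special_ids})

print(pre_stoi["<mod>"], post_stoi["<mod>"])  # same id in both mappings
print(pre_stoi["A"], post_stoi["A"])          # shifted ids for the same token
```

This avoids maintaining two truly independent tokenizers: there is one id space, and the "two stoi" are just offset views into it.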