Using Different Tokenizers in a Translation Model

I am trying to fine-tune a translation model, and I want to experiment with different tokenizers, which means I will not be using the same tokenizer for the source and target languages. How should I proceed in terms of the preprocessing function, the data collator, and the seq2seq training?
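
For context, here is a minimal sketch of what I am attempting with Hugging Face Transformers. The checkpoint names, the dataset name, and the column names `source_text` / `target_text` are placeholders for my actual setup:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Placeholder checkpoints -- two different tokenizers, one seq2seq model.
source_tokenizer = AutoTokenizer.from_pretrained("source-tokenizer-checkpoint")
target_tokenizer = AutoTokenizer.from_pretrained("target-tokenizer-checkpoint")
model = AutoModelForSeq2SeqLM.from_pretrained("seq2seq-model-checkpoint")

def preprocess(examples):
    # Source side: tokenized with the source tokenizer.
    model_inputs = source_tokenizer(
        examples["source_text"], truncation=True, max_length=128
    )
    # Target side: tokenized with the *target* tokenizer;
    # its input_ids become the labels.
    labels = target_tokenizer(
        examples["target_text"], truncation=True, max_length=128
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

dataset = load_dataset("my_parallel_corpus")  # placeholder dataset
tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

# The collator pads the encoder inputs with whichever tokenizer it is given
# (the source one here); labels are padded with label_pad_token_id=-100,
# and passing model= lets it build decoder_input_ids from the labels.
collator = DataCollatorForSeq2Seq(tokenizer=source_tokenizer, model=model)

args = Seq2SeqTrainingArguments(output_dir="out", predict_with_generate=True)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

In particular, I am unsure whether `DataCollatorForSeq2Seq` behaves correctly here given that the two tokenizers have different pad token ids, and whether the model's encoder and decoder embeddings need to be resized to match the two vocabularies.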