I am using a BERT-based model. I fine-tuned it on a dataset for a few epochs and saved the trained model.
When I load the tokenizer and model from the saved checkpoint, I can see that the tokenization produced by this checkpoint differs from that of the original model that was not fine-tuned.
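To illustrate, here is a minimal, self-contained version of the kind of comparison I am doing. It just saves the tokenizer and reloads it, with `bert-base-uncased` standing in for my actual base model (my real checkpoint path and dataset are not shown):

```python
import tempfile
from transformers import AutoTokenizer

# Placeholder for my actual base model
tok = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "An example sentence to compare tokenization."
before = tok.tokenize(text)

# Save and reload the tokenizer, the way a fine-tuning
# checkpoint would be saved and loaded
with tempfile.TemporaryDirectory() as d:
    tok.save_pretrained(d)
    reloaded = AutoTokenizer.from_pretrained(d)
    after = reloaded.tokenize(text)

print(before == after)
```

My understanding is that a plain save/reload round-trip like this should leave the tokenization unchanged, yet when I run the analogous comparison between the base model's tokenizer and the one loaded from my fine-tuned checkpoint, the outputs differ.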
I am not sure whether this is expected behaviour.
Thanks.