Should we save the tokenizer state over domain adaptation?

I am going to do domain adaptation on my dataset with BERT. When I train the BERT model, should I also save the tokenizer state? That is, will the tokenizer state change while the model is being trained?

No, the tokenizer is not changed by fine-tuning the model on a new dataset. Training only updates the model weights; the tokenizer's vocabulary and tokenization rules stay fixed. That said, it is still good practice to save the tokenizer next to the model checkpoint, so the checkpoint is self-contained and can be reloaded without remembering which base tokenizer it was paired with.
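A minimal sketch of that pattern with the Hugging Face `transformers` library (the model name and output directory here are placeholders; the domain-adaptation training loop itself is elided):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the base tokenizer and model (placeholder checkpoint name).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# ... domain-adaptation training here (e.g. continued MLM pretraining
# on your in-domain corpus) — only model weights change ...

# Save both to the same directory so the checkpoint is self-contained.
output_dir = "bert-domain-adapted"   # placeholder path
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)  # identical to the original, but reloadable in one step
```

Afterwards `AutoTokenizer.from_pretrained("bert-domain-adapted")` and `AutoModelForMaskedLM.from_pretrained("bert-domain-adapted")` reload the pair together, with no dependency on remembering the original base checkpoint.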
