Updating the model and tokenizer inside Trainer.train

Hi,
I am working on vocabulary adaptation for LLMs. Is there any way to update the model and tokenizer inside the `trainer.train` function, or do I need to write a standard PyTorch training loop to fine-tune?

PS. This needs to be done together with LoRA, since we are fine-tuning LLMs.
I am planning to do something similar to what is described in the work
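For context, the embedding-resize step I have in mind looks roughly like this. This is only a minimal NumPy sketch of growing an embedding matrix after new tokens are added to the vocabulary; the function name and the mean-based initialization heuristic are my own assumptions, and with transformers the equivalent step would be `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))`:

```python
import numpy as np

def resize_embeddings(old_emb, new_vocab_size, rng=None):
    """Grow an embedding matrix to new_vocab_size rows.

    Existing rows are kept unchanged; new rows are initialized near
    the mean of the old embeddings (a common heuristic, so new tokens
    start out "average" rather than random)."""
    old_vocab_size, dim = old_emb.shape
    assert new_vocab_size >= old_vocab_size
    rng = rng or np.random.default_rng(0)
    new_emb = np.empty((new_vocab_size, dim), dtype=old_emb.dtype)
    new_emb[:old_vocab_size] = old_emb
    mean = old_emb.mean(axis=0)
    new_emb[old_vocab_size:] = mean + 0.02 * rng.standard_normal(
        (new_vocab_size - old_vocab_size, dim))
    return new_emb

# Example: add 3 new tokens to a 10-token vocabulary
emb = np.random.default_rng(1).standard_normal((10, 8)).astype(np.float32)
grown = resize_embeddings(emb, 13)
print(grown.shape)  # (13, 8)
```

The open question for me is whether this kind of update can happen inside `trainer.train` (e.g. via a callback) while LoRA adapters are attached, since the resized `embed_tokens` and `lm_head` would presumably need to be marked trainable (e.g. via `modules_to_save` in the PEFT config).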