Finetune only certain embeddings

Hello, I want to fine-tune a DeBERTa model for sequence classification with limited compute, so I will need to freeze some layers. Through EDA I noticed that the presence of unknown tokens under the DeBERTa tokenizer is strongly correlated with the labels, so I was planning to add some of them to the tokenizer. My question is whether it is possible to freeze the embeddings of the known tokens while still learning representations for the new ones, or if I just need to unfreeze the whole embedding module. Thank you so much!
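In case it helps frame the question: one common workaround I have seen is to keep the whole embedding module trainable but zero out the gradient rows of the original vocabulary with a backward hook, so only the newly added rows actually move. Below is a minimal sketch of that idea using a plain `nn.Embedding` as a stand-in for the model's word embeddings (the sizes and the toy loss are made up for illustration; with a real model you would first call `resize_token_embeddings` after adding tokens). Note the caveat in the comments: this keeps frozen rows exactly fixed with plain SGD, but optimizers with decoupled weight decay (e.g. AdamW) can still shift rows whose gradient is zero.

```python
import torch
import torch.nn as nn

# Toy stand-in for the model's word-embedding matrix:
# rows 0..7 play the role of the original vocabulary,
# rows 8..9 are the newly added tokens we want to train.
vocab_size, hidden, num_new = 10, 4, 2
emb = nn.Embedding(vocab_size, hidden)

# Mask selecting only the new-token rows (1.0 = trainable, 0.0 = frozen).
grad_mask = torch.zeros(vocab_size, 1)
grad_mask[-num_new:] = 1.0

# Zero the gradients of the frozen rows on every backward pass.
# Caveat: with AdamW-style decoupled weight decay, frozen rows can
# still drift; set weight_decay=0 for this parameter group if needed.
emb.weight.register_hook(lambda grad: grad * grad_mask)

old_weights = emb.weight.detach().clone()

opt = torch.optim.SGD(emb.parameters(), lr=0.1)
ids = torch.tensor([0, 1, 8, 9])      # batch touching old and new tokens
loss = emb(ids).pow(2).sum()          # dummy loss just to produce gradients
loss.backward()
opt.step()

# Original-vocabulary rows are unchanged; new-token rows have moved.
print(torch.allclose(emb.weight[:-num_new], old_weights[:-num_new]))  # True
print(torch.allclose(emb.weight[-num_new:], old_weights[-num_new:]))  # False
```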