Model size doubles after finetuning

I noticed that the on-disk size of deberta-v3-small (~273 MB) and MiniLM-L6-H768 (~157 MB) roughly doubles after finetuning, to 541 MB and 313 MB respectively.
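For scale, a factor of two in checkpoint size is exactly what you get when the same weights are stored in 32-bit vs 16-bit floats. A self-contained sketch (a stand-in tensor, not the actual models) showing the on-disk ratio:

```python
import os
import tempfile

import torch

# Stand-in weight matrix; any tensor works for the size comparison.
w = torch.randn(1000, 1000)

with tempfile.TemporaryDirectory() as d:
    p32 = os.path.join(d, "fp32.pt")
    p16 = os.path.join(d, "fp16.pt")
    torch.save({"weight": w}, p32)          # 4 bytes per parameter
    torch.save({"weight": w.half()}, p16)   # 2 bytes per parameter
    ratio = os.path.getsize(p32) / os.path.getsize(p16)
    print(f"fp32/fp16 size ratio: {ratio:.2f}")  # ~2.0
```

This is only a size comparison, not a claim about what the Trainer did in my run.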
More details:
- `AutoModelForSequenceClassification` with 5 labels
- default `Trainer` and `TrainingArguments`, with minor changes:
  - warmup ratio = 0.3
  - weight decay = 0.03
  - batch size = 32
I am having trouble understanding why this happens. Does anyone have any clue?