Size mismatch error when loading a PEFT checkpoint of a model with resized token embeddings

I have started training the Llama 3.1 8B model using Unsloth. Because I am training on data in a new language, I made some changes to the code: I added tokens to the tokenizer and resized the model's token embeddings to match. When I load the resulting checkpoint with Transformers' AutoModelForCausalLM, I get a size mismatch error on the embedding weights. Can anyone explain why this happens?
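
For context, here is roughly what the relevant code looks like. This is a simplified sketch: the model id, the added tokens, and the checkpoint path are placeholders, and the Unsloth/PEFT training step is elided.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-3.1-8B"  # placeholder model id

# --- Training-side setup ---
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Add tokens for the new language (placeholders for the real additions)
num_added = tokenizer.add_tokens(["<new_tok_1>", "<new_tok_2>"])

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Grow the embedding matrix so it covers the enlarged vocabulary
model.resize_token_embeddings(len(tokenizer))

# ... attach LoRA adapters via Unsloth / PEFT, train, and save checkpoints ...

# --- Later: loading the saved checkpoint ---
# This is the call that fails with a size mismatch on the embedding weights,
# since the saved weights no longer match the base model's original vocab size:
model = AutoModelForCausalLM.from_pretrained("outputs/checkpoint-500")  # placeholder path
```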