Hello everyone,
I am trying to fine-tune a LLaMA-based model on my local server.
The issue is that after calling trainer.train(), the number of keys in trainer.model.state_dict() decreases. I noticed that all of the missing keys end with .bias.
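To narrow this down, here is a minimal sketch of how I am comparing the key sets before and after training (the helper name diff_keys and the example key names are just illustrative, not from any library):

```python
def diff_keys(before, after):
    """Return the keys present in `before` but missing from `after`, sorted."""
    return sorted(set(before) - set(after))

# In my actual run I capture the keys around trainer.train(), roughly:
#   keys_before = set(trainer.model.state_dict().keys())
#   trainer.train()
#   keys_after = set(trainer.model.state_dict().keys())
# Simulated here with example key names to show the pattern I observe:
keys_before = {"layers.0.attn.weight", "layers.0.attn.bias", "lm_head.weight"}
keys_after = {"layers.0.attn.weight", "lm_head.weight"}

print(diff_keys(keys_before, keys_after))  # ['layers.0.attn.bias']
```

In my case every key printed this way ends with .bias.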
I would appreciate any suggestions on what might cause this or how to fix it.
Thank you.