SFTTrainer: merge LoRA weights back into the base model?

Hello, I used this example (trl/examples/scripts/sft.py at main · huggingface/trl · GitHub) to create a fine-tuned model and saved it.

Previously, when I saved a PEFT model, I needed to call merge_and_unload to merge the LoRA weights back into the base model. This time, however, I was able to load the fine-tuned model with the from_pretrained method without any issue.
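
For context, this is roughly the merge workflow I used before (a minimal sketch; the model id and paths are placeholders, not the ones from my actual run):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the original base model (placeholder id)
base = AutoModelForCausalLM.from_pretrained("base-model-name")

# Attach the saved LoRA adapter on top of it (placeholder path)
model = PeftModel.from_pretrained(base, "path/to/adapter_checkpoint")

# Fold the LoRA weights into the base weights and drop the adapter layers
merged = model.merge_and_unload()

# Save a standalone model that can be loaded with plain from_pretrained
merged.save_pretrained("path/to/merged_model")
```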

Does this mean we no longer need to merge the PEFT adapter into the base model if we use trainer.save_model(output_dir)?
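
In case it helps, this is how I would check what trainer.save_model actually wrote (a sketch under the assumption that output_dir is the directory I passed to save_model; if it only contains adapter weights, adapter_config.json should be present):

```python
import os
from peft import AutoPeftModelForCausalLM
from transformers import AutoModelForCausalLM

output_dir = "path/to/output_dir"  # placeholder

if os.path.exists(os.path.join(output_dir, "adapter_config.json")):
    # Only the adapter was saved: AutoPeftModelForCausalLM loads the base model
    # named in adapter_config.json and attaches the LoRA weights to it.
    model = AutoPeftModelForCausalLM.from_pretrained(output_dir)
    # Merging would still be needed to produce a standalone checkpoint.
    model = model.merge_and_unload()
else:
    # A full model was saved; plain from_pretrained is enough.
    model = AutoModelForCausalLM.from_pretrained(output_dir)
```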

Thanks.
