Why is the Llama 2 model size after LoRA finetuning so large?

I have finetuned Llama 7B, Llama 13B, and Zephyr 7B.

After merging the LoRA adapter, the model size became much larger (6 checkpoint files, whereas the original had 2).

Can I reduce this back to the original size?
And can I further finetune the LoRA-finetuned model?

#LLM #Llama

Solved by passing torch_dtype=torch.bfloat16 when loading the base model before merging :slight_smile:
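
For anyone hitting the same thing, here is a minimal sketch of the merge step with that dtype fix, using transformers and peft. The model name and adapter/output paths are placeholders for your own; without torch_dtype, from_pretrained loads the weights in float32, which roughly doubles the size of the saved merged checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in bfloat16 so the merged weights stay in half precision.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder: the base model you finetuned
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter and merge it into the base weights.
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")  # placeholder path
model = model.merge_and_unload()

# Save the merged model; the shard count should match the original bf16 checkpoint.
model.save_pretrained("path/to/merged-model", safe_serialization=True)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.save_pretrained("path/to/merged-model")
```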