Memory requirements for DPO fine-tuning a 7B model

I wanted to know what the memory requirements are for DPO fine-tuning a 7B model using LoRA and quantization. What is the memory footprint of a simple DPO training run?
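For context on what goes into the footprint, here is a rough back-of-envelope estimate. All the numbers below (4-bit quantization, adapter size, activation memory) are assumptions for illustration, not measurements; actual usage depends on batch size, sequence length, LoRA rank, and whether the reference policy shares the quantized base via disabled adapters.

```python
# Back-of-envelope GPU memory estimate for QLoRA-style DPO on a 7B model.
# Every number here is an assumption, not a measurement.

GB = 1024 ** 3

n_params = 7e9                            # base model parameters
base_4bit = n_params * 0.5 / GB           # 4-bit quantized weights (~0.5 byte/param)

lora_params = 20e6                        # assumed adapter size (depends on rank/targets)
lora_fp16 = lora_params * 2 / GB          # adapter weights in fp16
adam_states = lora_params * 8 / GB        # two fp32 Adam moments per trainable param

# DPO also needs reference-policy logits; when the policy is a PEFT adapter on a
# quantized base, the same base can serve as the reference with adapters
# disabled, so no second full model copy is assumed here.
activations = 4.0                         # GB, guess for a modest batch/sequence length

total = base_4bit + lora_fp16 + adam_states + activations
print(f"base: {base_4bit:.2f} GB, lora: {lora_fp16:.3f} GB, "
      f"optimizer: {adam_states:.3f} GB, activations: {activations:.1f} GB")
print(f"rough total: {total:.1f} GB")
```

Under these assumptions the total lands well under 16 GB, which matches the common observation that QLoRA-based DPO of a 7B model can fit on a single consumer GPU; without quantization or with a separate fp16 reference model the footprint grows considerably.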
