How to calculate the memory required for LoRA fine-tuning

I want to fine-tune the LLaMA-2 7B model with LoRA, which adds only 0.015% trainable parameters. But during training it consumed the same amount of VRAM (53 GB) as full fine-tuning without LoRA. My setup is roughly the sketch below.
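
This is a minimal sketch assuming the Hugging Face `transformers` + `peft` APIs; the model id and LoRA hyperparameters here are illustrative, not necessarily my exact values:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model with fp16 weights (illustrative model id).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
)

# Typical LoRA config: low-rank adapters on the attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the tiny trainable fraction
```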
Could somebody help me with this?
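
For context, here is the back-of-the-envelope estimate I have been comparing against. It is a rough sketch: it assumes fp16 weights and gradients plus fp32 Adam states, and it ignores activations and any fp32 master-weight copies the optimizer may keep, both of which can add a lot:

```python
def estimate_static_vram_gb(n_base: float, n_trainable: float) -> float:
    """Static VRAM estimate: weights + gradients + Adam optimizer states."""
    weights = n_base * 2       # fp16 base weights: 2 bytes per parameter
    grads = n_trainable * 2    # fp16 gradients: trainable parameters only
    adam = n_trainable * 8     # Adam m and v states in fp32: 2 * 4 bytes
    return (weights + grads + adam) / 1024**3

N = 6.74e9  # approximate LLaMA-2 7B parameter count

print(estimate_static_vram_gb(N, N))            # full fine-tuning: ~75 GB
print(estimate_static_vram_gb(N, 0.00015 * N))  # LoRA at 0.015%:  ~13 GB
```

By this estimate, LoRA's static footprint should be close to the ~13 GB of frozen fp16 weights, far below the 53 GB I actually observe, so I assume I am either missing a large term (activations?) or misconfiguring something.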