Setting requires_grad=False does not seem to save GPU memory

Related to this topic:

from transformers import AutoModelForObjectDetection

model = AutoModelForObjectDetection.from_pretrained(
    "jozhang97/deta-swin-large",
)

# Freeze the backbone and the level embeddings so they receive no gradients
for name, param in model.model.backbone.named_parameters():
    param.requires_grad = False
model.model.level_embed.requires_grad = False
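
As a sanity check (plain PyTorch, nothing model-specific), counting trainable vs. total parameters confirms the freeze actually took effect:

# Sanity check: the trainable count should drop well below the total
# once the backbone and level embeddings are frozen
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")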

However, training with the Transformers Trainer does not reduce GPU memory: runs with and without requires_grad = False both use about 23 GB of VRAM.
Any recommendations? Or is this related to Accelerate?
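
For reference, this is roughly how I compare peak usage between the two runs, using standard torch.cuda utilities (trainer here is a hypothetical Trainer instance built on the model above):

import torch

# Reset CUDA peak-memory stats, run training, then read the peak
torch.cuda.reset_peak_memory_stats()
trainer.train()  # hypothetical Trainer instance wrapping `model` above
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak allocated: {peak_gb:.1f} GB")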