Need to set use_reentrant to True with latest Transformers

I’ve recently encountered an unexpected warning related to use_reentrant while working with the TrainingArguments. According to my understanding and the documentation, this warning should not typically surface, especially on the latest versions of the Transformers library. However, I’m consistently seeing it in my environment, which is currently set up with Transformers 4.39.3 and PyTorch 2.3.

In order to work around this, I am passing
gradient_checkpointing_kwargs={'use_reentrant': True} explicitly.
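
Roughly like this (a minimal sketch; output_dir and the batch size are just placeholders, the relevant part is the two checkpointing arguments):

```python
from transformers import TrainingArguments

# Passing use_reentrant explicitly silences the warning for me.
training_args = TrainingArguments(
    output_dir="out",                                        # placeholder
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": True},   # explicit instead of relying on the default
    per_device_train_batch_size=1,                           # placeholder
)
```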

Yeah, that works, or you can just set it in torch/utils/checkpoint.py.
When I set it to False I go OOM: PEFT/LoRA 4-bit training goes from 4 GB to 16 GB. Looks like a memory leak to me.