Free up GPU memory after training is finished or interrupted (on Colab)

Hi,

I noticed that the GPU memory is not freed after training finishes when using the Trainer class. In addition, it's not only the model that is taking up memory: the occupied size is far larger than the model (opt-350m) I am using, see the screenshot here:

I am wondering what's the recommended way to free up GPU memory after training finishes or gets interrupted? Currently I restart the session every time, which has some overhead. Thanks!


Hi

One thing you can do is:

import gc
import torch

gc.collect()
torch.cuda.empty_cache()

This will free up some of the GPU memory, but it might not free everything if some variables still hold references to tensors on the GPU. In that case you need to find out which variables those are and delete them explicitly with the `del` statement, for instance:

del my_model
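
To see why the `del` step matters: `torch.cuda.empty_cache()` only releases cached blocks that no tensor is using anymore, so as long as a Python name (like `trainer` or `model`) still references the model, its GPU memory cannot be returned. The sketch below demonstrates this reference-counting behavior without needing a GPU; the `Model` class and the name `model` are hypothetical stand-ins for your actual objects:

```python
import gc
import weakref

class Model:
    """Hypothetical stand-in for a large model object holding GPU tensors."""
    pass

model = Model()
ref = weakref.ref(model)   # lets us observe when the object is actually destroyed

del model                  # drop the last reference to the object
gc.collect()               # collect it even if it was caught in a reference cycle

print(ref() is None)       # True: the object is gone, so its tensors are freed
```

Only after the object is collected does `torch.cuda.empty_cache()` have anything to hand back to the driver, which is why the recommended order is `del` first, then `gc.collect()`, then `empty_cache()`.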