How to clear GPU memory with Trainer without commandline

Hi, I’m running a few small models in a loop in Python in my Jupyter notebook, but at the end of each loop I get a CUDA out-of-memory error. So far the only way to get around it is to restart the kernel, which defeats the purpose of the loop. Is there a Python command I can use to clear CUDA memory after each model?

I have tried torch.cuda.empty_cache(), but it doesn’t appear to work; I end up with the same error.


Here is one way to fix it. You can do it from a Jupyter notebook (that’s where I’ve been working) like this:

!pip install GPUtil

import torch
from GPUtil import showUtilization as gpu_usage
from numba import cuda

def free_gpu_cache():
    print("Initial GPU Usage")
    gpu_usage()

    # Release cached blocks that PyTorch is holding but no longer using
    torch.cuda.empty_cache()

    # Reset the device via numba to release leftover allocations,
    # then reselect it so subsequent code can use the GPU again
    cuda.select_device(0)
    cuda.close()
    cuda.select_device(0)

    print("GPU Usage after emptying the cache")
    gpu_usage()

free_gpu_cache()
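One thing worth noting: torch.cuda.empty_cache() can only return cached blocks whose tensors are no longer referenced anywhere, so if the model from the previous loop iteration is still alive, its memory cannot be freed. A minimal sketch of a loop that drops the reference first (build_model and train_fn are hypothetical placeholders for your own model construction and training code):

```python
import gc
import torch

def train_and_free(build_model, train_fn, configs):
    """Run several models in sequence, releasing GPU memory between runs.

    build_model and train_fn are hypothetical callables supplied by the
    caller; the point is the cleanup order after each run.
    """
    for config in configs:
        model = build_model(config)
        train_fn(model)
        del model                     # drop the last reference to the weights
        gc.collect()                  # clear any lingering reference cycles
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # now the cached blocks can be released
```

The order matters: del and gc.collect() make the tensors unreachable, and only then does empty_cache() have anything to hand back to the driver.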