Setting up a separate device for validation in Trainer?

Hi,

I am performing multi-GPU training, and even with batch_size = 1 the GPU memory is almost full.
That's why, when the validation stage starts, I get a CUDA OOM error.
Is there a way to dedicate one device to validation only? For example, "cuda:7" for validation and devices 0 to 6 for training?
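To make the question concrete, here is roughly the behaviour I'm after, sketched by hand outside the Trainer. This is just an illustration, not how Trainer actually evaluates; `model`, `eval_dataloader`, and the device indices are placeholders.

```python
import torch

eval_device = torch.device("cuda:7")  # the spare GPU I'd like to reserve for validation

@torch.no_grad()
def manual_validation(model, eval_dataloader):
    # Move the model to the spare GPU, run the whole eval loop there,
    # then move it back so training can continue on the other devices.
    model.eval()
    model.to(eval_device)
    total_loss, n_batches = 0.0, 0
    for batch in eval_dataloader:
        batch = {k: v.to(eval_device) for k, v in batch.items()}
        total_loss += model(**batch).loss.item()
        n_batches += 1
    model.to("cuda:0")  # simplification: in DDP each rank would keep its own device
    model.train()
    return total_loss / max(n_batches, 1)
```

Is there a supported way to get the Trainer to do something like this during its evaluation step?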

I have looked through the documentation and asked ChatGPT, but no luck so far.