My company has a 6-GPU server, but I only want to train the model on a specific number of GPUs (maybe 1 or 2).
How can I achieve that?
What are you using for training, the Trainer or pytorch-lightning?
You can set the environment variable
CUDA_VISIBLE_DEVICES=0,1
to specify the devices you want to use, and the Trainer will only use those CUDA devices.
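For example, you can export the variable in the shell before launching training (a sketch; the `python3 -c` line just echoes the value back so you can confirm what child processes will see):

```shell
# Make only GPUs 0 and 1 visible to anything launched from this shell.
export CUDA_VISIBLE_DEVICES=0,1

# Any CUDA program started afterwards sees just those two GPUs,
# renumbered as device 0 and device 1.
python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'   # prints 0,1
```

Setting it inline for a single command (`CUDA_VISIBLE_DEVICES=0,1 python3 train.py`) also works and avoids affecting the rest of the shell session.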
For pytorch-lightning, here’s the official doc.
Thank you. This is exactly what I need.
Adding these two lines at the top of a notebook (or Python script), before importing torch or any other CUDA-aware library, limits the devices available to that script:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"
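One pitfall worth noting: the variable only takes effect if it is set before the first CUDA initialization, so it has to come before the torch (or tensorflow) import. A minimal sketch of the right ordering (the renumbering described in the comments assumes physical GPUs 0 and 2 exist on the machine):

```python
import os

# Restrict visibility BEFORE any CUDA-aware library is imported;
# setting this after `import torch` has already initialized CUDA has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

# Only now import the framework, e.g.:
#   import torch
# Physical GPUs 0 and 2 then appear to the framework as cuda:0 and cuda:1,
# and torch.cuda.device_count() would report 2.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints 0,2
```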