Hello,
when I fine-tune my BERT model on our company's server, it takes up nearly all of our capacity. Is there any way to reduce the number of cores used?
What does "the number of used cores" mean? And which DL library do you use, PyTorch or TensorFlow?
I assume you fine-tune with PyTorch, and by "cores" you mean GPU devices.
Generally, there are two ways to limit GPU usage:

1. Set the `CUDA_VISIBLE_DEVICES` environment variable before launching your script (you can also set it from inside the script, as shown below):

```bash
# n means your gpu device id
export CUDA_VISIBLE_DEVICES=0,1,2,..n
```

2. Select a device in code:

```python
import torch

# make GPU 1 the default device for newly allocated CUDA tensors
torch.cuda.set_device(1)
```
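If you prefer to keep everything inside the training script, you can also set the environment variable from Python. A minimal sketch (the device id `"0"` is just an example):

```python
import os

# must be set before the first CUDA call, ideally before importing torch
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# torch now only sees the one GPU listed above
print(torch.cuda.device_count())  # -> 1
```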
TensorFlow provides similar configuration options; you can check the official documentation for reference.
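For example, in TensorFlow 2.x a rough equivalent is a sketch like this (assuming you want to keep only the first GPU visible to the process):

```python
import tensorflow as tf

# list the physical GPUs TensorFlow can see
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # restrict this process to the first GPU only
    tf.config.set_visible_devices(gpus[0], "GPU")
```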