Reduce number of cores


When I fine-tune my BERT model on our company's server, it takes up nearly all our capacity. Is there any way to reduce the number of cores used?

What does "the number of used cores" mean here?
Which DL library do you use, PyTorch or TensorFlow?

I assume you fine-tune with PyTorch, and that by "cores" you mean GPU devices.
Generally, there are two ways to limit GPU usage:

  1. Set the visible device environment variable:

```shell
# list the ids of the GPU devices you want to make visible
export CUDA_VISIBLE_DEVICES=0,1,2,..n
```
  2. Set the GPU device directly with the PyTorch library:

```python
import torch
torch.cuda.set_device(0)  # 0 is the id of the gpu device to use
```
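If you prefer to keep everything inside the training script, the environment variable from option 1 can also be set in Python, as long as it runs before PyTorch (or TensorFlow) initializes CUDA. A minimal sketch, where the device id `"0"` is just an example:

```python
import os

# This must execute before the first `import torch` (or `import tensorflow`),
# otherwise CUDA has already enumerated the devices and the setting is ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # example: expose only gpu 0
```

Any CUDA code that initializes afterwards in this process will then only see the listed device(s).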

TensorFlow provides similar configuration options (e.g. `tf.config.set_visible_devices`); see the official documentation for reference.
