Trainer attribute `n_gpu`

In DDP training with multiple GPUs, what should the value of `n_gpu` be?

In my setup, `n_gpu` is always 1 no matter how many GPUs are used. I am worried this indicates a bug.
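For reference, here is a minimal sketch of how I inspect the value (the `tmp_out` output directory and the `torchrun` launch line are placeholders; I read the world size from the environment rather than from the Trainer):

```python
import os

import torch
from transformers import TrainingArguments

# Launched with, e.g.: torchrun --nproc_per_node=4 check_n_gpu.py
args = TrainingArguments(output_dir="tmp_out")  # placeholder output dir

# Number of GPUs the Trainer assigns to *this* process;
# in my runs this always prints 1, regardless of how many GPUs I use.
print("n_gpu:", args.n_gpu)

# Total number of DDP processes in the job, as set by the launcher.
print("WORLD_SIZE env:", os.environ.get("WORLD_SIZE", "unset"))

# GPUs visible on the node (may differ from n_gpu per process).
print("cuda.device_count():", torch.cuda.device_count())
```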

Framework: PyTorch; GPU type: NVIDIA A100