How to run single-node, multi-GPU training with HF Trainer and deepspeed?

Hello,

I am new to LLM fine-tuning. I am working on a LoRA adaptation of a ProtT5 model.

Initially, I successfully trained the model on a single GPU, and now I am attempting to leverage four RTX A5000 GPUs (24GB of VRAM each) on a single machine. My objective is to speed up training by increasing the batch size, as indicated in the requirements of the model I’m training provided here. However, despite my efforts, I cannot increase the batch size beyond what a single GPU allows.

According to what I’ve read (HuggingFace docs), DeepSpeed automatically detects the GPUs, and since I use ZeRO stage 2 optimisation (see config below), the memory used on each GPU during training should be lower than with a single GPU. However, that is not the case.

I am using the HuggingFace Trainer, to which I pass the following DeepSpeed config:

ds_config = {
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": True
        },
        "allgather_partitions": True,
        "allgather_bucket_size": 2e8,
        "overlap_comm": True,
        "reduce_scatter": True,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": True
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": False
}
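
For context, here is roughly how a config like this is wired into the Trainer in my script (the model, dataset and output path below are placeholders, not my exact code; the "auto" fields are filled in from the TrainingArguments):

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./lora_prott5_out",      # placeholder output path
    per_device_train_batch_size=8,       # per-GPU micro batch size ("train_micro_batch_size_per_gpu")
    gradient_accumulation_steps=1,       # fills "gradient_accumulation_steps"
    learning_rate=3e-4,                  # fills the optimizer/scheduler "auto" params
    fp16=True,                           # fills "fp16.enabled"
    num_train_epochs=1,
    deepspeed=ds_config,                 # the dictionary defined above
)

trainer = Trainer(
    model=model,                         # the LoRA-wrapped ProtT5 model (built elsewhere in the script)
    args=training_args,
    train_dataset=train_dataset,         # tokenised training set (built elsewhere in the script)
)
trainer.train()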

I launch my training script with the following command:

deepspeed --master_port "$master_port" train_LoRA.py "$current_directory" "$num_gpus"

where "$current_directory" and "$num_gpus" are just variables used inside my training script (to load the data and print the setup). The ds_config is already defined in the training script as a dictionary.
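
To rule out a launch problem, a minimal sanity check like this (just debugging prints, nothing DeepSpeed requires) shows whether four processes are really running and what batch size each GPU gets once the "auto" values are resolved:

import os

local_rank = int(os.environ.get("LOCAL_RANK", 0))   # set per process by the deepspeed launcher
world_size = int(os.environ.get("WORLD_SIZE", 1))   # should be 4 with four GPUs
print(f"local_rank={local_rank}, world_size={world_size}")

# after building training_args:
effective_bs = (training_args.per_device_train_batch_size
                * world_size
                * training_args.gradient_accumulation_steps)
print(f"per-GPU batch size: {training_args.per_device_train_batch_size}, "
      f"effective batch size per step: {effective_bs}")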

This is what I get during training (the iteration speed is exactly the same as with a single GPU).

Would you have any idea what the problem could be?

I have been reading the documentation for days but can’t figure out where the problem comes from …

I found the same phenomenon. This answer was useful for me: [Dreambooth] Multi-GPU training with accelerate is magnitudes slower than single GPU (non-flax) · Issue #1734 · huggingface/diffusers · GitHub. In my case I do not fix the number of training samples but the number of training steps, and each training step actually processes num_gpus * batch_size samples, so the per-step speed looks the same as with one GPU even though the overall throughput is higher.
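
As a rough illustration with made-up numbers:

num_gpus = 4
per_device_batch_size = 8
gradient_accumulation_steps = 1

# samples consumed per optimizer step across all GPUs
effective_batch_size = num_gpus * per_device_batch_size * gradient_accumulation_steps  # 32

# when max_steps is fixed (instead of the number of samples), each step still takes
# about the same wall-clock time as on one GPU, but 4x more data is processed overall
max_steps = 1000
samples_seen = effective_batch_size * max_steps  # 32000 with 4 GPUs vs 8000 with 1 GPU
print(effective_batch_size, samples_seen)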