Question about using the Trainer with DeepSpeed

I use the command below to run the program on a single GPU (device 1):

nohup deepspeed --include localhost:1 finetune_wav2vec2_for_ASR.py > finetune_wav2vec2_for_ASR.log 2>&1 &

The TrainingArguments:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=output_dir,
    group_by_length=True,
    per_device_train_batch_size=4,
    evaluation_strategy='epoch',
    save_strategy='epoch',
    num_train_epochs=1,
    fp16=True,
    do_eval=False,
    do_train=True,
    gradient_checkpointing=True,
    gradient_accumulation_steps=16,
    logging_steps=50,
    learning_rate=1e-4,
    weight_decay=0.005,
    warmup_steps=1000,
    save_total_limit=2,
    seed=seed,
    remove_unused_columns=False,
    local_rank=-1,
    deepspeed='./ds_config_zero2.json',
)
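
The Trainer itself is set up in the standard way; roughly like this, where `model`, `data_collator`, and `train_dataset` are placeholders for my actual Wav2Vec2 model, padding collator, and dataset:

from transformers import Trainer

# Standard Trainer setup; the names below stand in for my actual objects.
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()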

The ds_config_zero2.json is copied from the transformers documentation; it looks like this:

{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "bf16": {
        "enabled": "auto"
    },

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "allgather_partitions": true,
        "allgather_bucket_size": 2e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 100,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
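
For reference, the "auto" values in this file are filled in by the Trainer from the TrainingArguments at initialization (e.g. `lr` from `learning_rate`, `warmup_num_steps` from `warmup_steps`, `train_micro_batch_size_per_gpu` from `per_device_train_batch_size`). A small standalone sketch I can run to list which fields are left as "auto" (just a sanity check, not part of the training script):

import json

# Standalone sanity check: list every field in the DeepSpeed config that is
# left as "auto" and is therefore filled in by the HF Trainer from the
# TrainingArguments at init time.
with open('./ds_config_zero2.json') as f:
    ds_config = json.load(f)

def find_auto(node, path=''):
    if isinstance(node, dict):
        for key, value in node.items():
            find_auto(value, f'{path}.{key}' if path else key)
    elif node == 'auto':
        print(path)

find_auto(ds_config)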

Everything is OK before training, but when it reaches the training step, the program hangs for several hours. Can anyone help me? Thanks a lot.