DeepSpeed ZeRO-3 with LoRA - Merging adapters

Hi!

I’m having trouble merging LoRA adapters trained with DeepSpeed ZeRO-3 via the Trainer API. The model was fine-tuned on a single node with 4 GPUs. I am not sure whether DeepSpeed with LoRA saves the full model, the adapters, or both.
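
As a first sanity check, here is how I list what the checkpoint folder actually contains (a quick sketch; the folder name is from my run). If adapter_model.safetensors turns out to be only a few bytes, it was likely written as an empty placeholder rather than real weights:

import os

ckpt = "checkpoint-28500"
for name in sorted(os.listdir(ckpt)):
    path = os.path.join(ckpt, name)
    if os.path.isfile(path):
        print(f"{name}: {os.path.getsize(path)} bytes")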

The code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained(model_id)  # device_map/torch_dtype are no-ops for tokenizers
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, pathPeftModel)  # load the LoRA adapter
model = model.merge_and_unload()                         # fold adapter weights into the base model

where pathPeftModel refers to the folder “checkpoint-28500”
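
Once the merge works, the intent is simply to save the merged model and tokenizer (sketch; the output folder name is arbitrary):

merged_path = "merged-model"            # arbitrary output folder
model.save_pretrained(merged_path)
tokenizer.save_pretrained(merged_path)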

This gives the following error:

  File "…/ lib/python3.11/site-packages/peft/utils/save_and_load.py", line 444, in load_peft_weights
    adapters_weights = safe_load_file(filename, device=device)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "… /lib/python3.11/site-packages/safetensors/torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
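
As far as I understand, this error means safetensors cannot parse the file header at all. A safetensors file begins with an 8-byte little-endian length followed by a JSON header of that size, so a minimal check (sketch; the adapter file name is assumed) is:

import json, struct

path = "checkpoint-28500/adapter_model.safetensors"   # assumed file name
with open(path, "rb") as f:
    header_len = struct.unpack("<Q", f.read(8))[0]    # first 8 bytes: header size
    header = json.loads(f.read(header_len))           # fails similarly if the file is corrupt/empty
print(f"{len(header)} entries in the header")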

DeepSpeed (Accelerate) config used:

compute_environment: LOCAL_MACHINE
debug: true
deepspeed_config:
  gradient_accumulation_steps: 2
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero3_save_16bit_model: false
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
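
Note that zero3_save_16bit_model: false means the checkpoint keeps the partitioned ZeRO-3 shards instead of a consolidated 16-bit model, which could be why the saved adapter file is unreadable. If the real weights are in the shards, one way to recover them is DeepSpeed's consolidation helper (a sketch; filtering and renaming the LoRA keys into PEFT's expected format may need adjusting):

from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# rebuild a full fp32 state dict from the ZeRO shards under checkpoint-28500
state_dict = get_fp32_state_dict_from_zero_checkpoint("checkpoint-28500")
# keep only the LoRA tensors; PEFT may expect different key prefixes
lora_sd = {k: v for k, v in state_dict.items() if "lora_" in k}

DeepSpeed also writes a zero_to_fp32.py script into the checkpoint folder that does the same conversion from the command line.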

I'm hitting the same error. Did you manage to solve the problem?