Inference from a checkpoint

Hello,
I’m in the process of fine-tuning a model with PEFT and LoRA. Is it possible to load the first checkpoint (even though training is not finished) and run inference on it?

Checkpoint-1 contains :

  • adapter_config.json
  • adapter_model.safetensors
  • optimizer.pt
  • README.md
  • rng_state.pth
  • scheduler.pt
  • special_tokens_map.json
  • tokenizer_config.json
  • tokenizer.model
  • trainer_state.json
  • training_args.bin

If I try:

from transformers import AutoModelForCausalLM
from peft import PeftModel

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL_PATH)
ft_model = PeftModel.from_pretrained(model, checkpoint_path)
it returns:

lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
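To check whether adapter_model.safetensors itself is readable, I also tried parsing its header by hand. This is only a sketch based on the safetensors file layout (an 8-byte little-endian header length followed by a JSON header); check_safetensors_header is a helper name I made up, and the demo builds a tiny synthetic file rather than using my real checkpoint:

```python
import json
import os
import struct
import tempfile

def check_safetensors_header(path):
    """Parse and return the JSON header of a .safetensors file.

    Raises ValueError if the file is too short or the header is not
    valid JSON -- roughly the conditions behind InvalidHeaderDeserialization.
    """
    with open(path, "rb") as f:
        length_bytes = f.read(8)
        if len(length_bytes) != 8:
            raise ValueError("file too short to contain a safetensors header")
        (header_len,) = struct.unpack("<Q", length_bytes)  # u64, little-endian
        header_bytes = f.read(header_len)
        try:
            return json.loads(header_bytes)
        except json.JSONDecodeError as exc:
            raise ValueError(f"header is not valid JSON: {exc}")

# Demo: build a minimal well-formed safetensors file and check it.
header = json.dumps(
    {"w": {"dtype": "F32", "shape": [1], "data_offsets": [0, 4]}}
).encode("utf-8")
blob = struct.pack("<Q", len(header)) + header + b"\x00\x00\x00\x00"

with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as tmp:
    tmp.write(blob)

print(check_safetensors_header(tmp.name))
os.remove(tmp.name)
```

If this helper raises on the real adapter_model.safetensors, the file is likely truncated or corrupted (e.g. an incomplete download or a pickle file renamed to .safetensors) rather than a PEFT loading problem.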

Thanks!