Proper way of saving/loading models for complex workflows

I think `base_model_name_or_path` in the PEFT adapter configuration is pointing at the model on the Hub rather than your local copy. How about overwriting it after saving, like this?

    from peft import LoraConfig, PeftConfig, TaskType, get_peft_model

    peft_config = LoraConfig(
        r=4,
        lora_alpha=32,
        task_type=TaskType.SEQ_CLS,
        target_modules="all-linear",
    )
    model = get_peft_model(base_model, peft_config)

    # Save the adapter weights and adapter_config.json
    model.save_pretrained(path_to_dir)

    # Overwrite adapter_config.json so it points to the local base model
    # directory instead of the Hub ID
    peft_cfg = PeftConfig.from_pretrained(path_to_dir)
    peft_cfg.base_model_name_or_path = str(path_to_dir)
    peft_cfg.save_pretrained(path_to_dir)

    # Save the base model weights and the tokenizer alongside the adapter
    model.base_model.save_pretrained(path_to_dir)
    tokenizer.save_pretrained(path_to_dir)
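If you want to sanity-check the rewrite without loading any model, the same config edit can be done (and inspected) on `adapter_config.json` directly. A minimal sketch with only the standard library; the config contents here are a made-up stand-in for what PEFT writes, and `bert-base-uncased` is just a hypothetical Hub ID:

```python
import json
import tempfile
from pathlib import Path

# Simulate the adapter_config.json that PEFT's save_pretrained writes
# (only the fields relevant to this fix).
path_to_dir = Path(tempfile.mkdtemp())
cfg_file = path_to_dir / "adapter_config.json"
cfg_file.write_text(json.dumps({
    "base_model_name_or_path": "bert-base-uncased",  # hypothetical Hub ID
    "peft_type": "LORA",
}))

# Point the base-model reference at the local directory, same effect
# as the PeftConfig round-trip above but with plain json.
cfg = json.loads(cfg_file.read_text())
cfg["base_model_name_or_path"] = str(path_to_dir)
cfg_file.write_text(json.dumps(cfg, indent=2))

print(json.loads(cfg_file.read_text())["base_model_name_or_path"])
```

After this, loading the adapter from `path_to_dir` should resolve the base model locally instead of hitting the Hub.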