The model's parameter names are inconsistent with the saved safetensors, which causes loading to fail

In the phi-2 model, one parameter is named “model.layers.1.self_attn.query_key_value.weight”, while its counterpart in the .safetensors file is named “transformer.h.1.mixer.Wqkv.weight”. I believe this mismatch is what causes loading the .safetensors weights into the model to fail. Is there a standard way to resolve this naming inconsistency? Any help would be appreciated. Thanks. My code is as follows:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("D:\\ML pls\\phi-2")
model = AutoModelForCausalLM.from_pretrained("D:\\ML pls\\phi-2")

The “D:\ML pls\phi-2” folder is an exact copy of the official one here: microsoft/phi-2 at main
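For reference, the workaround I am considering is renaming the checkpoint keys before loading them into the model. This is only a sketch: the rename rules below are my guesses based on the one mismatch I observed (`transformer.h.N.mixer.Wqkv` vs. `model.layers.N.self_attn.query_key_value`), not an official conversion table, and the `out_proj` → `dense` pair is an additional assumption.

```python
import re

# Hypothetical rename rules mapping the on-disk phi-2 key names to the names
# the instantiated model expects. These pairs are assumptions inferred from
# the single mismatch I observed, not an official conversion table.
RULES = [
    (r"^transformer\.h\.(\d+)\.mixer\.Wqkv\.",
     r"model.layers.\1.self_attn.query_key_value."),
    (r"^transformer\.h\.(\d+)\.mixer\.out_proj\.",
     r"model.layers.\1.self_attn.dense."),
]

def remap_key(name: str) -> str:
    """Rename one checkpoint key to the model's naming scheme.

    The first matching rule wins; keys with no matching rule are returned
    unchanged.
    """
    for pattern, repl in RULES:
        new_name, n = re.subn(pattern, repl, name)
        if n:
            return new_name
    return name

# One would then build a renamed state dict and load it with
# model.load_state_dict(renamed, strict=False), inspecting the returned
# missing/unexpected key lists to find rules that are still wrong.
```

For example, `remap_key("transformer.h.1.mixer.Wqkv.weight")` returns `"model.layers.1.self_attn.query_key_value.weight"`, while an already-correct key such as `"lm_head.weight"` passes through unchanged. I would still prefer a supported solution over maintaining this mapping by hand.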