How should I modify the attention layer of the model and save the modified model? I want to know whether this approach is feasible and whether it will cause any problems (my attention layer has new parameters).
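For illustration, a minimal sketch of what such a custom class could look like (the attn_gate parameter here is a hypothetical stand-in for the new parameters; the real class would also override forward to use it, which is omitted):

import torch
from torch import nn
from transformers import AutoModelForCausalLM
from transformers.models.qwen2 import modeling_qwen2

class MyQwen2Attn(modeling_qwen2.Qwen2SdpaAttention):
    def __init__(self, config, layer_idx=None):
        super().__init__(config, layer_idx)
        # New learnable parameter, one gate value per attention head.
        self.attn_gate = nn.Parameter(torch.ones(config.num_attention_heads))

With that in place, the patch-and-save flow is: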
# Patch the dispatch table (and the module-level name) *before* loading,
# so from_pretrained builds every attention layer with the custom class.
modeling_qwen2.QWEN2_ATTENTION_CLASSES["sdpa"] = MyQwen2Attn
modeling_qwen2.Qwen2SdpaAttention = MyQwen2Attn
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
# save_pretrained writes the full state dict, new parameters included.
model.save_pretrained("./model")
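What I am unsure about, concretely: on the first from_pretrained the new parameters are not in the original Qwen checkpoint, so they presumably keep their init values (transformers warns about newly initialized weights); and config.json in ./model still describes a plain Qwen2ForCausalLM, so I assume any process that reloads the model must re-apply the patch first, something like:

from transformers import AutoModelForCausalLM
from transformers.models.qwen2 import modeling_qwen2

# MyQwen2Attn must also be defined/imported in this process.
# Re-apply the patch before loading; otherwise the checkpoint's extra
# weights would be dropped as unexpected keys and stock attention built.
modeling_qwen2.QWEN2_ATTENTION_CLASSES["sdpa"] = MyQwen2Attn
modeling_qwen2.Qwen2SdpaAttention = MyQwen2Attn
reloaded = AutoModelForCausalLM.from_pretrained("./model")

(One more assumption: QWEN2_ATTENTION_CLASSES only exists in some transformers versions; newer releases refactored the attention dispatch, so the snippets above assume a version where this table is still present.)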