How to set trust_remote_code=True when prompt-tuning a locally deployed model

Here is my code:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

MODEL_PATH = r"mypath"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, low_cpu_mem_usage=True, trust_remote_code=True)
config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Input",
    num_virtual_tokens=len(tokenizer("Input")["input_ids"]),
    tokenizer_name_or_path=MODEL_PATH,
)
model = get_peft_model(model, config)

"ValueError: Loading mypath requires you to execute the tokenizer file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error."
I already passed trust_remote_code=True when loading both the model and the tokenizer, but I still get this error when wrapping the model with get_peft_model. What should I do?

I just want to use prompt tuning to fine-tune chatglm2, and I have the same question.
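For anyone hitting the same error: when prompt_tuning_init=PromptTuningInit.TEXT, peft loads the tokenizer again internally (from tokenizer_name_or_path) to tokenize the init text, and that internal load does not inherit the trust_remote_code=True you passed earlier. A minimal sketch of one possible workaround, assuming a recent peft release whose PromptTuningConfig accepts a tokenizer_kwargs field that is forwarded to that internal tokenizer load (check your installed peft version; older releases may not have this field):

```python
from peft import PromptTuningConfig, PromptTuningInit, TaskType

MODEL_PATH = r"mypath"  # placeholder local path, as in the question

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Input",
    num_virtual_tokens=8,  # e.g. len(tokenizer("Input")["input_ids"])
    tokenizer_name_or_path=MODEL_PATH,
    # Assumption: forwarded to the tokenizer that peft loads internally
    # for the TEXT init, so the custom ChatGLM tokenizer code is trusted.
    tokenizer_kwargs={"trust_remote_code": True},
)
```

If your peft version does not support tokenizer_kwargs, another way to sidestep the error is prompt_tuning_init=PromptTuningInit.RANDOM, which initializes the virtual tokens randomly and never loads a tokenizer inside peft.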