Can you fine-tune a CausalLM model (GPT-2) for a seq2seq task by redefining the architecture, or do I need to retrain the model from scratch?

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_PATH)
model = AutoModelForCausalLM.from_config(model_config)  # from_config builds the architecture with random weights; from_pretrained would load the checkpoint

and then try to turn it into an AutoModelForSeq2SeqLM.
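For context, this is roughly what I am attempting (a minimal sketch, assuming MODEL_NAME_PATH points at a standard GPT-2 checkpoint):

from transformers import AutoModelForSeq2SeqLM

# Naive attempt: load the same GPT-2 checkpoint through the seq2seq auto class.
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME_PATH)
# This fails, since GPT2Config is not registered for AutoModelForSeq2SeqLM:
# GPT-2 is a decoder-only architecture with no encoder-decoder mapping.

Or would the right route be to reuse the pretrained weights inside an encoder-decoder wrapper rather than retraining from scratch? Something like the sketch below (bert-base-uncased as the encoder is just an example; the GPT-2 decoder gets freshly initialised cross-attention layers that would still need fine-tuning):

from transformers import EncoderDecoderModel

# Reuse pretrained checkpoints as the encoder and decoder of a seq2seq model.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")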