Domain adaptation from a Causal LM to a Seq2Seq model

Hey all, I’m trying to retrain the CodeGen model to generate code from prompts. The model was originally trained as a causal LM to complete code, similar to how GitHub Copilot works. I want to use it to generate code from natural-language prompts, e.g. ‘function to perform this operation on a dataframe’ would return the code that performs it. I’m pretty sure that means it would need to be a Seq2Seq LM; the issue is that the Seq2Seq fine-tuning classes don’t support CodeGen, since it’s a causal LM. How can I fine-tune this model for a different task while retaining the model’s knowledge through transfer learning? I haven’t come across any literature on this.
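In case it helps clarify what I mean by prompt → code: my idea so far is to format each (prompt, code) training pair as a single string, so the causal LM could in principle learn to continue a prompt with the matching code. This is just my own sketch (the separator and function names are made up, not from any CodeGen docs):

```python
# Hypothetical sketch: keep CodeGen as a causal LM and concatenate
# prompt + code into one training sequence, with a separator the
# model would learn to treat as "code starts here".
SEP = "\n# ---\n"  # assumed separator token, my own choice

def build_example(prompt: str, code: str) -> str:
    """Format a (prompt, code) pair as one causal-LM training string."""
    return f"# {prompt}{SEP}{code}"

example = build_example(
    "function to drop duplicate rows from a dataframe",
    "def drop_dupes(df):\n    return df.drop_duplicates()",
)
print(example)
```

Is this kind of prompt-completion formatting a reasonable substitute for a true Seq2Seq setup, or is there a proper way to do this?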

Thanks!
John.