Task-specific fine-tuning of GPT2

Hi there :wave:

In the Seq2Seq examples (transformers/examples/legacy/seq2seq at master · huggingface/transformers · GitHub), why is there no mention of GPT-x? It seems to me that it shouldn't be difficult to fine-tune this model using GPT2LMHeadModel for particular text-to-text tasks, for instance along the lines of the sketch below.
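
Here's a minimal sketch of what I have in mind: treating a text-to-text task as plain causal language modeling by concatenating the source and target into one sequence and fine-tuning with the LM head. The example task string and the choice of EOS as a separator are just assumptions on my part, not an established convention:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical text-to-text pair (just an illustration)
source = "translate English to German: The house is wonderful."
target = "Das Haus ist wunderbar."

# GPT-2 has no dedicated separator token, so the EOS token is reused here
# (an assumption on my part) to mark the boundary between source and target.
text = source + tokenizer.eos_token + target + tokenizer.eos_token
inputs = tokenizer(text, return_tensors="pt")

# Standard causal-LM fine-tuning step: labels are the input ids themselves,
# and the model shifts them internally to compute the next-token loss.
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()
```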

Wondering if anyone has any thoughts on this.

Thanks!