Hi there
In the Seq2Seq examples (transformers/examples/legacy/seq2seq at master · huggingface/transformers · GitHub), why is there no mention of GPT-x? It seems to me that it shouldn't be difficult to fine-tune this kind of model with GPT2LMHeadModel
for particular text-to-text tasks.
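For concreteness, here's a rough sketch of what I have in mind, assuming the source and target are concatenated into a single sequence with a separator and trained with the standard causal LM loss (the `" TL;DR: "` separator and `format_example` helper are just placeholders I made up, not anything from the examples):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# GPT-2 has no pad token by default; reuse EOS if batching/padding is needed.
tokenizer.pad_token = tokenizer.eos_token

def format_example(source: str, target: str) -> str:
    # Join source and target into one sequence, so the causal LM
    # learns to continue the source text with the target text.
    return source + " TL;DR: " + target + tokenizer.eos_token

text = format_example("A long input document ...", "a short target output")
inputs = tokenizer(text, return_tensors="pt")

# Standard causal-LM fine-tuning step: passing labels == input_ids makes
# the model compute the next-token prediction loss over the whole sequence.
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()
```

One open question with this approach is whether to mask the source tokens out of the loss (by setting their label positions to -100) so the model is only penalized on the target portion.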
I'm wondering if anyone has any thoughts on this.
Thanks!