Which model can I fine-tune on input-output pair data for text generation?

Which PreTrainedModel type and library from the list below can I use to fine-tune on the data below for a text generation task? The model should take both the title and the description as input during training, and the fine-tuned model should later be able to generate a description from just the title.

  • GPT-like (also called auto-regressive Transformer models)
  • BERT-like (also called auto-encoding Transformer models)
  • BART/T5-like (also called sequence-to-sequence Transformer models)

The data is a dataset of title-description pairs like this:

data = [
    {"title": "apple", "description": "delicious fruit having anti-oxidants"},
    {"title": "orange", "description": "delicious fruit rich in Vitamin C"},
    {"title": "pineapple", "description": "delicious fruit rich in xyz"},
]
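
To make the intended mapping concrete, this is how I picture the training pairs being formed (just my own assumption of the setup, not something taken from the docs):

# my assumption: the title is the model input and the description is the target text
inputs = [item["title"] for item in data]            # e.g. "apple"
targets = [item["description"] for item in data]     # e.g. "delicious fruit having anti-oxidants"
# at inference time I would like to pass only a new title and have the
# fine-tuned model generate a description for it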

My efforts: I have seen that T5 and BART are recommended in the docs, but I am not sure how to use them for fine-tuning.
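
For reference, this is roughly what I have pieced together for T5 so far; the model name and preprocessing are just my guesses, which is why I am asking whether this is the right approach:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# my guess: encode the title as the input and the description as the labels
enc = tokenizer("apple", return_tensors="pt")
labels = tokenizer("delicious fruit having anti-oxidants", return_tensors="pt").input_ids

out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
loss = out.loss  # presumably this is what a training loop would minimize

# after fine-tuning, generate a description from only a title
ids = tokenizer("pineapple", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids)[0], skip_special_tokens=True))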

I also found a text generation example script mentioned on the [GPT2 model page](OpenAI GPT2) for fine-tuning to generate recipes, but it uses only the instructions field and not the name of the dish during fine-tuning.
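
If a GPT-like model is the better fit, my guess is that I would have to concatenate the title and the description into a single training text myself, something like:

# my guess at preparing data for a GPT-like (causal LM) model:
# join title and description so the model learns to continue a title with its description
train_texts = [f"{item['title']} : {item['description']}" for item in data]
# at generation time I would prompt with just "pineapple : " and let the model complete it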

I would appreciate any suggestions and useful links.