How to use a model card in the case of multitask learning?

T5 (Text-To-Text Transfer Transformer) can be trained in a multitask setting. Suppose we have such a model, trained for both a summarization task (summarize English [en]) and a translation task (translate English [en] to German [de]). In that case, how should one prepare the model card so that the Hub knows whether the text given in the widget should be used for translation or for summarization?

For a single task we do something like this:

---
language:
- fr
- es
tags:
- translation
datasets:
- dataset1
- dataset2
widget:
- text: "some French text"
---

The Hub will probably assign the Text2TextGeneration pipeline to T5 models, and in that case it won't add any task prefix automatically. So you should put the prefix text for each task in the widget text, and add a note in the model card telling users to prepend the appropriate prefix themselves when using the model for a different task.
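For example, the multitask metadata could look something like the following. This is a minimal sketch: the language codes, tags, and dataset names are placeholders, while the prefixes shown (`summarize:` and `translate English to German:`) are the standard T5 task prefixes.

---
language:
- en
- de
tags:
- summarization
- translation
datasets:
- dataset1
- dataset2
widget:
- text: "summarize: The tower is 324 metres tall, about the same height as an 81-storey building."
- text: "translate English to German: The house is wonderful."
---

Each widget entry then carries its own prefix, so the inference widget should run the right task for whichever example the user selects.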

For reference, see valhalla/t5-small-qa-qg-hl on the Hugging Face Hub.
