T5 (Text-to-Text Transfer Transformer) can be trained with multitask learning. Suppose we have such a model, trained for a summarization task (summarize English [en]) and a translation task (translate English [en] to German [de]). How should one prepare the model card so that the model knows whether the text given in the widget should be summarized or translated?
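As far as I understand, T5 selects the task through a prefix in the input text itself, so the two inputs would look something like this:
----------------------------
summarize: some long English text
translate English to German: some English text
----------------------------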
For a single task we do something like this:
----------------------------
language:
- fr
- es
tags:
- translation
- french
- spanish
datasets:
- dataset1
- dataset2
widget:
- text: "some French text"
----------------------------
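For the multitask model, would something like the following work? This is only a rough sketch: the dataset names are placeholders, and I'm assuming the task prefix has to go directly into each widget example so the model receives it as part of the input.
----------------------------
language:
- en
- de
tags:
- summarization
- translation
datasets:
- dataset1
- dataset2
widget:
- text: "summarize: some long English text"
- text: "translate English to German: some English text"
----------------------------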