Missing pretraining datasets for T5 models

I'm not sure who to contact about this, but the model cards for the original T5 models on the Hub appear to be incomplete. They list only C4 as the pretraining dataset, yet the uploaded checkpoints have evidently also been trained on supervised tasks such as the GLUE datasets, as running the following snippet shows:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')

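# GLUE-style inputs, using the task prefixes (mnli, mrpc, sst2) from the T5 paper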
text = ['mnli hypothesis: The movie was terrible. premise: The film was bad.',
        'mrpc sentence1: The movie was terrible. sentence2: The film was bad.',
        'sst2 sentence: The movie was terrible.']

tokenized = tokenizer(text, padding=True, return_tensors='pt')
generations = model.generate(**tokenized)
print(tokenizer.batch_decode(generations, skip_special_tokens=True))
# prints: ['entailment', 'equivalent', 'negative']

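For what it's worth, the same behaviour shows up for non-GLUE tasks from the paper's multitask mixture. Here is a quick follow-up check along the same lines, using the translation and summarization prefixes from the paper; the summarization input is just a toy example and I'm not asserting the exact outputs here:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')

# Prefixes for two non-GLUE tasks from the multitask mixture:
# WMT English-German translation and abstractive summarization.
text = ['translate English to German: The movie was terrible.',
        'summarize: The movie was terrible. The acting was bad and the plot made no sense.']

tokenized = tokenizer(text, padding=True, return_tensors='pt')
generations = model.generate(**tokenized, max_length=40)
print(tokenizer.batch_decode(generations, skip_special_tokens=True))
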
This is consistent with the multitask training described in the original paper, but it isn't reflected in the model cards. If anyone is able to update the model cards to include all datasets used during training (or knows who to contact to do so), that would be great!