How do I set the batch size when fine-tuning a TFT5ForConditionalGeneration model?
Currently, I’ve run my input sequences and labels through the T5Tokenizer and am calling the model with…
output = model(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
where model = TFT5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
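For context, my understanding is that the batch size isn't a model argument at all: it's determined by how many examples you stack into the leading dimension of input_ids before the forward pass (typically via tf.data.Dataset.batch). A minimal pure-Python sketch of that grouping, using made-up token ids in place of real T5Tokenizer output:

```python
def batch_examples(examples, batch_size):
    """Group tokenized examples into fixed-size batches.

    Pure-Python sketch of what tf.data.Dataset.batch does: each returned
    batch would be stacked into one tensor of shape (batch_size, seq_len)
    and passed to the model as input_ids.
    """
    return [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]

# Hypothetical token-id sequences standing in for real tokenizer output.
input_ids = [
    [101, 7, 2],
    [101, 8, 2],
    [101, 9, 2],
    [101, 10, 2],
    [101, 11, 2],
]

batches = batch_examples(input_ids, batch_size=2)
# Five examples with batch_size=2 give three batches; the last is smaller.
```

Is that right, or is there a batch-size argument I should be passing to the model call itself?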