Shouldn't padding be True? Please explain

In the following code, why don't we set padding=True inside the tokenizer? Don't we want all the lists to be the same size?
max_length = 128

def preprocess_function(examples):
    inputs = [ex["en"] for ex in examples["translation"]]
    targets = [ex["fr"] for ex in examples["translation"]]
    model_inputs = tokenizer(
        inputs, text_target=targets, max_length=max_length, truncation=True
    )
    return model_inputs
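
To make the concern concrete, here is a quick hypothetical check of what this function returns without padding (the checkpoint is an assumption; the original snippet doesn't name one). Each example keeps its own length:

```python
# Hypothetical quick check -- the checkpoint is assumed, for illustration only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

examples = {
    "translation": [
        {"en": "Hello", "fr": "Bonjour"},
        {"en": "How are you today?", "fr": "Comment allez-vous aujourd'hui ?"},
    ]
}
model_inputs = preprocess_function(examples)
print([len(ids) for ids in model_inputs["input_ids"]])  # different lengths, no padding yet
print([len(ids) for ids in model_inputs["labels"]])     # target lengths differ too
```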

I just read up on this: padding is the job of the data collator (DataCollatorForSeq2Seq), which pads the inputs with the tokenizer's pad token and pads the labels with -100 so that our loss function ignores those positions.
Is this correct?
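
For reference, here is how I understand it in code: a minimal sketch using the standard DataCollatorForSeq2Seq from transformers, with the same assumed en-fr checkpoint as above. The collator pads each batch only to the length of its longest example (dynamic padding), which is why this is done per batch rather than once in the tokenizer:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

checkpoint = "Helsinki-NLP/opus-mt-en-fr"  # assumed checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Pads input_ids with tokenizer.pad_token_id (attention_mask with 0)
# and pads labels with label_pad_token_id, which defaults to -100.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

# Two tokenized examples of different lengths, as preprocess_function would produce them.
features = [
    dict(tokenizer(src, text_target=tgt, max_length=128, truncation=True))
    for src, tgt in [
        ("Hello", "Bonjour"),
        ("How are you today?", "Comment allez-vous aujourd'hui ?"),
    ]
]

batch = data_collator(features)
print(batch["input_ids"])  # shorter example padded with the pad token id
print(batch["labels"])     # shorter label sequence padded with -100, ignored by the loss
```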