T5 for conditional generation: getting started

You should add max_length=None to your model.generate() call, I think. If that doesn’t work, try something like max_length=500 and see whether the generations get longer. I think you should also set min_length=None.

The reason is that T5ForConditionalGeneration, I think, loads a config file at some point that specifies these parameters. You can see the default values at transformers/generation_utils.py at master · huggingface/transformers · GitHub

So if you want to see what the model is being loaded with when we do .from_pretrained(), call print(model.config). I think we’ll see that the default is max_length=20, which would be causing your problem. Set both max_length and min_length to None, and then the model will stop only when the EOS token is the most probable output.
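A quick way to check those defaults without downloading any weights is to instantiate a bare config. This is just a sketch assuming a recent transformers install; max_length and min_length here come from the shared PretrainedConfig defaults, which a checkpoint’s config.json may override:

```python
from transformers import T5Config

# A fresh T5Config inherits the generation defaults from PretrainedConfig,
# which is what generate() falls back to when you don't pass these
# arguments explicitly.
config = T5Config()
print(config.max_length)  # 20 - generation is cut off here unless overridden
print(config.min_length)  # 0
```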

edit:
I think you could also directly modify some of these config parameters at load time, e.g. with model.config.max_length = new_value, rather than passing them at every generation call.
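A minimal sketch of that pattern (the values are placeholders; I’m using a bare T5Config as a stand-in for the model.config object you’d get back from .from_pretrained()):

```python
from transformers import T5Config

# Stand-in for model.config; after .from_pretrained() you would mutate
# model.config directly instead of building a config by hand.
config = T5Config()

# Placeholder values: raise the length cap and drop the floor once, at
# load time, so every later generate() call picks them up as defaults.
config.max_length = 512
config.min_length = 0

print(config.max_length)  # 512
```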
