HuggingFace - Why does the T5 model shorten sentences?

I wanted to train a model for spelling correction. I trained two models: allegro/plt5-base on Polish sentences and google/t5-v1_1-base on English sentences. Unfortunately, and I don't know why, both models shorten the sentences. Example:

phrases = ['The name of the man who was kild was Jack Robbinson he has black hair brown eyes blue Jacket and blue Jeans.']
encoded = tokenizer(phrases, return_tensors="pt", padding=True, max_length=512, truncation=True)
print(encoded)
# {'input_ids': tensor([[   37,   564,    13,     8,   388,   113,    47,     3,   157,   173,
#             26,    47,  4496,  5376,  4517,   739,     3,    88,    65,  1001,
#           1268,  4216,  2053,  1692, 24412,    11,  1692,  3966,     7,     5,
#              1]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
#          1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}

encoded.to('cuda')
translated = model.generate(**encoded)
print(translated)
# tensor([[   0,   37,  564,   13,    8,  388,  113,   47, 2170,   47, 4496, 5376,
#          4517,  739,    3,   88,   65, 1001, 1268, 4216]], device='cuda:0')

tokenizer.batch_decode(translated, skip_special_tokens=True)
#['The name of the man who was born was Jack Robbinson he has black hair brown']
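
For what it's worth, the shortening is easy to quantify by comparing input and output token counts (a small sketch reusing the variables from the snippet above):

input_len = encoded["input_ids"].shape[1]   # 31 tokens go in for this example
output_len = translated.shape[1]            # only 20 come out
print(input_len, output_len)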

Something like this happens with almost every longer sentence. I tried to check whether the model has a maximum sequence length set, based on the documentation: T5 — transformers 3.1.0 documentation. But the config of this model has no such field:
n_positions – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_positions can also be accessed via the property max_position_embeddings.
This is the entire config of the model:

T5Config {
  "_name_or_path": "final_model_t5_800_000",
  "architectures": [
    "T5ForConditionalGeneration"
  ],
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 768,
  "decoder_start_token_id": 0,
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "gated-gelu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "num_decoder_layers": 12,
  "num_heads": 12,
  "num_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.18.0",
  "use_cache": true,
  "vocab_size": 32128
}

What can be done to make the model return whole sentences?

Update

Earlier I was looking at the old documentation. In the new documentation I don't see any field in the config about the maximum sequence length at all: new documentation
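
For reference, one way to check for a length limit programmatically (a minimal sketch; "final_model_t5_800_000" is my fine-tuned checkpoint from the config above, and the attribute may simply be absent, as it is here):

from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("final_model_t5_800_000")
tokenizer = T5Tokenizer.from_pretrained("final_model_t5_800_000")

# n_positions is not guaranteed to exist on a T5Config (T5 uses relative
# position buckets), so fall back to None instead of raising.
print(getattr(model.config, "n_positions", None))

# The tokenizer also carries its own length limit, used when truncation=True.
print(tokenizer.model_max_length)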

I'm sorry if I'm misunderstanding your question.

I think maybe it has to do with the tokenizer settings. With truncation=True and padding=True, it will pad (lengthen) shorter sentences and truncate (shorten) longer sentences so they are all the same length. Your example also sets the tokenizer's max_length=512, which could shorten your sentences too.

I think if you are using code like tokenizer = T5Tokenizer.from_pretrained("t5-small"), it should use the same settings the tokenizer had during training.
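
If it helps, here is a small sketch for checking whether truncation is actually cutting the input before it ever reaches the model (t5-small is just a stand-in here; use the tokenizer saved alongside your model):

from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

text = "The name of the man who was kild was Jack Robbinson he has black hair brown eyes blue Jacket and blue Jeans."

with_trunc = tokenizer(text, max_length=512, truncation=True)
without_trunc = tokenizer(text)

# If the two lengths match, the tokenizer is not cutting anything off and
# the shortening must be happening at a later stage.
print(len(with_trunc["input_ids"]), len(without_trunc["input_ids"]))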

I hope that is maybe helpful. :hugs: