Always output generation config in terminal

Greetings,

I am using T5-small. For some reason, whenever I call the model.generate() method, a generation config is printed to the terminal, which makes my log really hard to read.

Generate config GenerationConfig {
  "_from_model_config": true,
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.29.0"
}

…and the same block is printed again on every single call to generate().

This is my generation code, which is pretty simple:

    def generate(self, diffusion_emb, attention_mask=None, labels=None):
        with torch.no_grad():
            input_embeddings = self.project_from_diffusion(diffusion_emb)
            # print(input_embeddings.shape)
            outputs = self.t5.generate(
                inputs_embeds=input_embeddings,
                attention_mask=attention_mask,
                eos_token_id=self.tokenizer.eos_token_id,
                max_length=self.max_length,
            )
        return outputs

Why is this happening, and how do I suppress this output?
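For reference, one workaround I have been trying (I am not sure it is the intended fix): the message looks like an INFO-level log from the transformers library's own logger, so raising that logger's level should hide it. A minimal sketch, assuming the message indeed comes from a logger under the "transformers" namespace:

```python
import logging

# Assumption: the "Generate config GenerationConfig {...}" message is emitted
# at INFO level by a logger inside the "transformers" namespace. Raising the
# parent logger's threshold to ERROR should suppress it without touching my
# own application logging.
logging.getLogger("transformers").setLevel(logging.ERROR)

# transformers also ships its own helper for the same thing:
# from transformers.utils import logging as hf_logging
# hf_logging.set_verbosity_error()
```

This silences all INFO/WARNING output from transformers, not just the generation config, so it may be too blunt if you rely on its warnings.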

Thank you very much!

It seems this problem is not unique to me; other people have reported the same issue. I wonder whether it is related to T5 or to PyTorch Lightning…