Cannot reproduce the fine-tuning process for text summarization (TensorFlow)

I am following the tutorial on how to fine-tune a text summarization model.

I tried to run the code in my environment (Linux, Jupyter server), but with no luck.

I always get an empty string as the summary. Here is my script:

I think the root cause might be the format of my model. When I try to use my model, these warning messages are shown:

```
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/initializers/ UserWarning: The initializer RandomNormal is unseeded and being called multiple times, which will return identical values each time (even if the initializer is unseeded). Please update your code to provide a seed to the initializer, or avoid using the same initalizer instance more than once.
All model checkpoint layers were used when initializing TFMT5ForConditionalGeneration.
All the layers of TFMT5ForConditionalGeneration were initialized from the model checkpoint at HanSSH/mt5-small-finetuned-amazon-en-es.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMT5ForConditionalGeneration for predictions without further training.
```

However, these messages do not show up when I use the “huggingface-course/mt5-small-finetuned-amazon-en-es” model.
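One way to narrow this down is to decode the generated ids both with and without special tokens: if the model emits only `<pad>`/`</s>`, the summary decodes to an empty string when special tokens are skipped. The sketch below is a minimal diagnostic, not my full script; the checkpoint name is the one from the warnings above, and the sample input text is invented:

```python
# Diagnostic sketch: inspect the raw ids the model generates.
# Assumes the Hugging Face transformers TF API; sample text is made up.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

checkpoint = "HanSSH/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("I loved this product, it works great.", return_tensors="tf")
out = model.generate(**inputs, max_new_tokens=60)

# Keeping special tokens shows exactly what was produced; an output that is
# only <pad> and </s> decodes to "" once special tokens are skipped.
print(tokenizer.decode(out[0], skip_special_tokens=False))
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If the first `decode` shows only special tokens, the checkpoint's decoder weights were likely not trained (or not loaded) as expected, rather than there being a bug in the generation call itself.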

Any suggestions?