Fine-tuning T5-large for paraphrasing multiple times with the same parameters and data gives different results

I use this script to fine-tune T5-large for paraphrasing (on Google Colab).

I fine-tuned it twice from scratch, using the same data and the same parameters, and expected the results to be identical. However, the results are quite different.

Is there a way to set, for example, a seed, so that I get the same result when everything else is the same?

I am not sure whether setting a seed like this is supposed to make the results identical:

import random
import numpy as np
import torch

def set_seed(seed):
    # Seed Python, NumPy, and PyTorch (CPU and all GPUs).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)
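
For a truly identical run on GPU, seeding alone is usually not enough: cuDNN autotuning and some CUDA kernels are non-deterministic by default. Here is a minimal sketch of the extra PyTorch switches that usually help (enable_determinism is just an illustrative helper name; the exact flags depend on your PyTorch/CUDA versions):

import os
import torch

def enable_determinism():
    # Force cuDNN to use deterministic kernels and disable autotuning,
    # which can otherwise pick different algorithms between runs.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Raise an error on ops that have no deterministic implementation
    # (available in PyTorch >= 1.8).
    torch.use_deterministic_algorithms(True)
    # Needed for deterministic cuBLAS matmuls on CUDA >= 10.2;
    # set this before any CUDA work happens.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

enable_determinism()
set_seed(42)

Note that the transformers library also ships transformers.set_seed(), which seeds Python, NumPy, and Torch (including CUDA) in one call.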

Hi @zokica, are you using Colab Pro for fine-tuning the t5-large model? In my case I am getting RAM issues.
Is there a way to fine-tune it on a free Colab instance?

No, I use a cheap GPU provider such as vast.ai or runpod. It is pretty cheap.

On Google Colab, if you get a 12 GB GPU, it is not possible. I think even for T5-base you need 12 GB of VRAM.

For T5-large I use an RTX 3090; it is pretty fast.