Reproducible Results?

I have a general question about reproducible results in Huggingface. My experiments usually work quite well, yielding the same results across multiple runs. However, I notice that when I extend a model class, the results are no longer the same even though the settings should be identical. For example, say I am extending some Huggingface model like this:

import torch


class MyModel(HuggingfaceModel):
    def __init__(self, *args):
        super().__init__(*args)

        # Want to have some extra operation here
        if torch.rand() < 0:
            pass  # do something

For context, the number “0” might be a hyperparameter that I want to vary. Even when the condition is torch.rand() < 0, which should never evaluate to True and therefore should not change the behaviour compared to the original model, I notice that my training results still look different. Is this expected, or am I seeding something incorrectly here? (Note that I am not doing any explicit seeding myself, so I am leaving everything to be done in the HuggingfaceModel.)
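
For reference, if explicit seeding is what is missing, I assume it would look roughly like the sketch below (using transformers.set_seed; the value 42 is just an arbitrary example). This is exactly what I am currently not doing anywhere:

import torch
from transformers import set_seed

# What I understand explicit seeding would look like (not in my current code).
set_seed(42)           # seeds Python's random module, NumPy and torch in one call
torch.manual_seed(42)  # the torch-only equivalent; set_seed already covers this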