Why do I get different results with the same seed in Transformers?

I had:
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="testTrainer",
    evaluation_strategy="epoch",
    seed=42,
    num_train_epochs=10.0,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    fp16=True,
    run_name="Set1",
)

I ran it twice and obtained different results with the BERT model dumitrescustefan/bert-base-romanian-cased-v1.
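For context on what `seed=42` does and does not control: Trainer seeds the Python, NumPy, and PyTorch RNGs at construction (via `transformers.set_seed`), but run-to-run determinism on GPU also depends on the CUDA kernels being deterministic, and `fp16=True` in particular can surface nondeterministic reductions. Below is a minimal sketch of fuller seeding, assuming a PyTorch backend; the helper name `set_full_seed` is my own, not a library function:

```python
import random

import numpy as np
import torch


def set_full_seed(seed: int) -> None:
    """Hypothetical helper: seed all RNGs and request deterministic kernels."""
    # These four seeds are roughly what transformers.set_seed covers.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    # Extra, beyond plain seeding: prefer deterministic CUDA kernels.
    # warn_only=True avoids errors for ops with no deterministic version.
    torch.use_deterministic_algorithms(True, warn_only=True)
    torch.backends.cudnn.benchmark = False


# With identical seeding, two runs draw identical random numbers:
set_full_seed(42)
a = torch.randn(3)
set_full_seed(42)
b = torch.randn(3)
assert torch.equal(a, b)
```

Even with all of this, mixed-precision (`fp16`) training on GPU can still diverge slightly between runs because some reduction orders are not fixed, so bitwise-identical metrics are not guaranteed.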