Why is eval_dataset set to the test dataset in the Trainer args?

Hello,
I am seeing some Github repositories that set eval_dataset with test_dataset. I was thinking that it might cause overfitting. Can you please tell me if there is a specific reason, or it is just a bug?

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets_train,
    eval_dataset=tokenized_datasets_test,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    data_collator=data_collator_,
)
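For context, what I expected instead was carving a separate validation split out of the training data, passing that as `eval_dataset`, and keeping the test set untouched until a single final evaluation. A minimal stdlib sketch of that three-way split (the function name and fractions are my own, just for illustration):

```python
import random

def three_way_split(examples, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle and split a dataset into train/validation/test.

    The validation split is what I would pass as `eval_dataset`
    during training; the test split stays held out for one final
    evaluation after model selection.
    """
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

train, val, test = three_way_split(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```

(With the `datasets` library one would typically use `Dataset.train_test_split` twice instead; the sketch above just shows the principle.)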