Use Trainer API with two validation sets

Hi everyone,
right now the Trainer API accepts a single eval_dataset. I am wondering: is it somehow possible to provide two different validation sets that are both evaluated during training? For example, I might want to track my validation loss on two sets: one previously sampled from my training data (and hence sharing its distribution), and one sampled from a dataset with a presumably different distribution (e.g., stemming from a different time period). The idea comes from “Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks”.
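Conceptually, what I’m after is something like the sketch below: each named validation set gets evaluated, and its metrics are logged under a distinct prefix. This is plain Python, not actual Trainer code; `compute_loss` and the set names are just placeholders for illustration.

```python
def evaluate_all(compute_loss, eval_sets):
    """Evaluate on several named validation sets, prefixing each metric
    so the two losses can be tracked separately during training.

    compute_loss: callable mapping one example to a loss value (placeholder
                  for whatever the model's evaluation step computes).
    eval_sets:    dict mapping a set name to a list of examples.
    """
    metrics = {}
    for name, examples in eval_sets.items():
        losses = [compute_loss(x) for x in examples]
        metrics[f"eval_{name}_loss"] = sum(losses) / len(losses)
    return metrics


# Toy usage: an in-distribution set and a distribution-shifted set
# (here the "loss" is just the example value itself, for illustration).
metrics = evaluate_all(
    lambda x: x,
    {"in_domain": [1.0, 2.0, 3.0], "shifted": [4.0, 6.0]},
)
print(metrics)  # one loss entry per validation set
```

Something like this could presumably be wired in by overriding the Trainer’s evaluation step, but I’d prefer a built-in way if one exists.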

Thanks in advance :slight_smile:
Simon