How to train a model without shuffling data on multiple GPUs

Hi, my project requires pretraining a large model from scratch on data in a fixed order, and my code is modified from the official "run_mlm.py" script.
Is it possible to train the model without shuffling the data when using multiple GPUs?
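
For context, the behavior I am after looks roughly like the sketch below: a plain PyTorch `DistributedDataParallel` loop where `DistributedSampler(..., shuffle=False)` keeps the global data order fixed. The dataset, model, and loss here are placeholders, not my actual setup. With `shuffle=False`, rank `r` reads indices `r, r + world_size, r + 2 * world_size, ...`, so each global step still consumes one contiguous block of the dataset in its original order.

```python
# Minimal sketch of fixed-order training under DistributedDataParallel.
# Assumptions: the dataset, model, and loss are dummies standing in for a
# real pretraining setup; launch with `torchrun --nproc_per_node=<N> train.py`.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder dataset: 1024 scalar "samples" in a fixed order.
    dataset = TensorDataset(torch.arange(1024, dtype=torch.float32).unsqueeze(1))

    # shuffle=False keeps the global order fixed; each rank gets a
    # deterministic strided shard (rank, rank + world_size, ...).
    sampler = DistributedSampler(dataset, shuffle=False)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    # Placeholder model and loss, just to make the loop runnable.
    model = DDP(torch.nn.Linear(1, 1).cuda())
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for epoch in range(2):
        # set_epoch only affects shuffling, so it is a no-op here,
        # but it is harmless to keep for symmetry with shuffled runs.
        sampler.set_epoch(epoch)
        for (batch,) in loader:
            batch = batch.cuda(non_blocking=True)
            loss = model(batch).pow(2).mean()  # dummy loss for the sketch
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

If the answer is to keep the `Trainer` used by run_mlm.py, I assume the equivalent would be subclassing `Trainer` so that its training sampler is a non-shuffling one, but the right hook for that seems to vary across transformers versions, which is part of why I am asking.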