How can I load a large Hugging Face dataset efficiently?

Thank you for the reply. Following your suggestion, is there a way in Hugging Face Datasets to split a dataset into multiple shards so they can be loaded in parallel?