Azure ML Datastore

Hello. My dataset is stored in Azure Blob Storage as Parquet files, and I am able to create an Azure ML dataset with Dataset.Tabular.from_parquet_files. My question: how do I convert this to a Hugging Face dataset? Locally I would load Parquet files like this:
from datasets import load_dataset
dataset = load_dataset("parquet", data_files={"train": "train.parquet", "test": "test.parquet"})

Also, any general recommendations on how to create a PyTorch DataLoader for large Parquet files in Azure?
Thanks