Available datasets to train the run_translation.py example

When trying to train a model on SageMaker, there’s this example from https://huggingface.co/google/mt5-small:

import sagemaker
from sagemaker.huggingface import HuggingFace

# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
	'model_name_or_path':'google/mt5-small',
	'output_dir':'/opt/ml/model'
	# add your remaining hyperparameters
	# more info here https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/translation
}

# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'}

# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
	entry_point='run_translation.py',
	source_dir='./examples/pytorch/translation',
	instance_type='ml.p3.2xlarge',
	instance_count=1,
	role=role,
	git_config=git_config,
	transformers_version='4.17.0',
	pytorch_version='1.10.2',
	py_version='py38',
	hyperparameters = hyperparameters
)

# start the training job
huggingface_estimator.fit()

From the code of run_translation.py at v4.17.0 (https://github.com/huggingface/transformers/blob/v4.17.0/examples/pytorch/translation/run_translation.py), there is a requirement to provide either (i) a dataset_name or (ii) a train_file and validation_file.
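
For example, I am guessing that option (i) would look roughly like this (wmt16 with its ro-en config is only my assumption of a suitable translation dataset, not something from the snippet above):

hyperparameters = {
	'model_name_or_path': 'google/mt5-small',
	'dataset_name': 'wmt16',          # assumed: a translation dataset from the Hub
	'dataset_config_name': 'ro-en',   # assumed: one of wmt16's language-pair configs
	'source_lang': 'en',
	'target_lang': 'ro',
	'do_train': True,
	'output_dir': '/opt/ml/model',
}

and that for option (ii) the dataset_name entries would be replaced by 'train_file' and 'validation_file' paths.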

Are there examples of available Hugging Face datasets that we can use to train the model?

Is there an example of the JSON Lines (jsonl) file format for the train_file and validation_file arguments?

Hi! You can find the required JSON structure in the translation example’s README (https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/translation), and for the list of available translation datasets, browse the Hub filtered by the translation task (https://huggingface.co/datasets?task_categories=task_categories:translation).
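
For illustration, each line of the train_file / validation_file is a standalone JSON object with a "translation" key mapping language codes to text. Here is a minimal sketch that writes such files (the en/ro sentence pairs and the file names train.json / val.json are placeholders):

import json

# One JSON object per line; the inner keys must match the
# source_lang / target_lang codes passed to run_translation.py.
pairs = [
	{"translation": {"en": "The weather is nice today.", "ro": "Vremea este frumoasă astăzi."}},
	{"translation": {"en": "Machine translation is useful.", "ro": "Traducerea automată este utilă."}},
]

for path in ("train.json", "val.json"):
	with open(path, "w", encoding="utf-8") as f:
		for row in pairs:
			f.write(json.dumps(row, ensure_ascii=False) + "\n")

On SageMaker you would then upload the files to S3, pass them as a data channel, e.g. huggingface_estimator.fit({'train': 's3://my-bucket/data'}) (the bucket name is a placeholder), and point 'train_file' / 'validation_file' at the channel’s local path, such as /opt/ml/input/data/train/train.json.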
