How are transformers_version and pytorch_version determined?

When deploying to SageMaker, the Hub gives us a sample snippet:

```python
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role,
)
```

But my model was built with transformers 4.23.1 and PyTorch 1.12.1.

When I used the parameters given by the Hub, the model ran successfully on certain instance types; however, on ml.inf1.xlarge it complains that PyTorch 1.9.1 is required. Can we specify these versions arbitrarily? Is there any guidance?

There are different Deep Learning Containers available with different combinations of PyTorch and Transformers; see here.

That being said, if you require a specific version of torch and/or transformers and it's not available in that list, you can provide a requirements.txt file and pin the exact version number there. See also this post; hope that helps.
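As a sketch, the `requirements.txt` goes in a `code/` directory inside your `model.tar.gz` (the `code/` location follows the Hugging Face inference container convention; the file names and the pinned versions below are taken from the question and are illustrative):

```
model.tar.gz
├── config.json
├── pytorch_model.bin        # your model weights
└── code/
    └── requirements.txt     # installed when the endpoint container starts

# contents of code/requirements.txt, pinning the versions
# the model was built with:
transformers==4.23.1
torch==1.12.1
```

The container installs these packages at startup, on top of whatever the base Deep Learning Container ships with, so the `transformers_version`/`pytorch_version` arguments still only select the base image.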
