I am trying to deploy a custom fine-tuned Llama 2 model on Amazon SageMaker.
However, compressing the model artifacts into a .tar.gz is taking a very long time, so I want to know whether it is possible to point at an uncompressed model directory instead.
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

huggingface_model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface", version="0.8.2"),  # TGI LLM container
    model_data="s3_path",          # S3 location of the model artifacts
    role=role,                     # IAM role with SageMaker permissions
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
    env=hub,                       # Hub / environment configuration dict
)
Can "s3_path" point at an uncompressed model directory (an S3 prefix) rather than a .tar.gz archive?
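For context, recent versions of the SageMaker Python SDK appear to accept model_data as a dictionary describing an S3 prefix with "CompressionType": "None", which is what I am hoping to use; the bucket path below is a placeholder. A minimal sketch of what I would pass:

```python
# Hypothetical S3 prefix; "CompressionType": "None" is meant to tell SageMaker
# to load the raw (uncompressed) model files directly from the prefix.
model_data = {
    "S3DataSource": {
        "S3Uri": "s3://my-bucket/llama2-model/",  # prefix ends with a trailing slash
        "S3DataType": "S3Prefix",
        "CompressionType": "None",
    }
}
print(model_data["S3DataSource"]["CompressionType"])
```

If this dict form is supported for the Hugging Face LLM container, it would avoid the tar/gzip step entirely.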