Error hosting endpoint when deploying model

I’m trying to deploy meta-llama/Llama-2-70b-hf using SageMaker, but I’m getting an error I don’t understand.

I created a Jupyter notebook instance on SageMaker (ml.g5.2xlarge with a 256 GB volume, and later one with a 1024 GB volume).

I copied the deploy-to-SageMaker script from Hugging Face and replaced my token:

import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

try:
	role = sagemaker.get_execution_role()
except ValueError:
	iam = boto3.client('iam')
	role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
	'HF_MODEL_ID':'meta-llama/Llama-2-70b-hf',
	'SM_NUM_GPUS': json.dumps(1),
	'HUGGING_FACE_HUB_TOKEN': '<REPLACE WITH YOUR TOKEN>'
}

assert hub['HUGGING_FACE_HUB_TOKEN'] != '<REPLACE WITH YOUR TOKEN>', "You have to provide a token."

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	image_uri=get_huggingface_llm_image_uri("huggingface",version="0.9.3"),
	env=hub,
	role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1,
	instance_type="ml.g5.2xlarge",
	container_startup_health_check_timeout=300,
)
  
# send request
predictor.predict({
	"inputs": "My name is Julien and I like to",
})

I get the following error:
UnexpectedStatusException: Error hosting endpoint huggingface-pytorch-tgi-inference-2023-09-06-16-46-01-586: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint..

The logs for the endpoint just show the model download progressing.

EDIT: for clarity, the last two log entries read:

2023-09-06T16:59:07.045706Z  INFO text_generation_launcher: Download: [9/15] -- ETA: 0:05:43.333332
2023-09-06T16:59:07.045945Z  INFO text_generation_launcher: Download file: model-00010-of-00015.safetensors

The download seems to just stop, and I don’t see a reason why.
Any help is greatly appreciated.

You cannot deploy a 70B model on a g5.2xlarge instance: it has a single NVIDIA A10G GPU with 24 GB of memory, while the fp16 weights of Llama-2-70B alone are roughly 140 GB. On top of that, your logs show the weight download still had more than five minutes to go, so the container could not pass the ping health check within your 300-second container_startup_health_check_timeout. You need a larger multi-GPU instance (e.g. ml.g5.48xlarge, which has 8x A10G), a matching SM_NUM_GPUS so TGI shards the model across all GPUs, and a longer startup timeout. See: Deploy Llama 2 7B/13B/70B on Amazon SageMaker
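
For reference, here is a minimal sketch of your script adapted for the 70B model. The instance type, the 900-second timeout, and the MAX_INPUT_LENGTH / MAX_TOTAL_TOKENS values are illustrative assumptions on my part (not something I re-tested), so adjust them to your account and workload:

import json
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()

# Shard the model across all 8 GPUs of the instance and cap the
# sequence lengths so the KV cache fits in the remaining GPU memory.
hub = {
	'HF_MODEL_ID': 'meta-llama/Llama-2-70b-hf',
	'SM_NUM_GPUS': json.dumps(8),          # ml.g5.48xlarge has 8x A10G (24 GB each)
	'MAX_INPUT_LENGTH': json.dumps(2048),  # illustrative TGI limits, tune as needed
	'MAX_TOTAL_TOKENS': json.dumps(4096),
	'HUGGING_FACE_HUB_TOKEN': '<REPLACE WITH YOUR TOKEN>',
}

huggingface_model = HuggingFaceModel(
	image_uri=get_huggingface_llm_image_uri("huggingface", version="0.9.3"),
	env=hub,
	role=role,
)

# Give the container enough time to download ~140 GB of weights
# before SageMaker starts failing the ping health check.
predictor = huggingface_model.deploy(
	initial_instance_count=1,
	instance_type="ml.g5.48xlarge",
	container_startup_health_check_timeout=900,
)

Note that you may need to request a service-quota increase before you can launch an ml.g5.48xlarge endpoint in your account.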