GPT-J fails on Amazon Sagemaker

I am trying to deploy a GPT-J instance on SageMaker.

This is my Jupyter notebook sample:

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

# IAM role with permissions to create endpoint
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
	'HF_MODEL_ID':'EleutherAI/gpt-j-6B',
	'HF_TASK':'text-generation'
}


# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	transformers_version='4.17.0',
	pytorch_version='1.10.2',
	py_version='py38',
	env=hub,
	role=role, 
)


# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1, # number of instances
	instance_type='ml.m5.4xlarge' #'ml.m5.xlarge' # ec2 instance type
)

What I basically changed from the suggested deployment snippet is the instance type.
When calling the endpoint I keep getting errors, which I assume are due to latency or memory.

Example of error:

ReadTimeoutError: Read timeout on endpoint URL: "https://runtime.sagemaker.eu-central-1.amazonaws.com/endpoints/huggingface-pytorch-inference-xxxx-xxx-xxx-xx-xx-xx/invocations"

This is with the latest image; I have also tried switching images.

Can anyone point out something I'm doing wrong?

I'd like to point out that I'm just starting out with ML models, so I don't have much background knowledge.

Thanks

Hey :wave:

You might find solutions in this blog article Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker


Thanks a lot!
It kinda works. I still get a lot of out-of-memory errors, but I'll mark it as the solution for now!
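For anyone hitting the same out-of-memory errors: a rough back-of-envelope estimate (my own numbers, not from the thread) shows why GPT-J 6B is tight on a 64 GiB instance like ml.m5.4xlarge. The weights alone need roughly 4 bytes per parameter in fp32, and loading a checkpoint can temporarily need about twice that, so fp16 weights make a big difference:

```python
# Rough memory estimate for GPT-J 6B weights (parameter count is approximate).
params = 6_050_000_000  # ~6B parameters

fp32_gib = params * 4 / 1024**3  # 4 bytes/param in fp32
fp16_gib = params * 2 / 1024**3  # 2 bytes/param in fp16

print(f"fp32 weights: ~{fp32_gib:.1f} GiB")  # ~22.5 GiB
print(f"fp16 weights: ~{fp16_gib:.1f} GiB")  # ~11.3 GiB

# Loading an fp32 checkpoint can briefly hold checkpoint + model in memory:
peak_load_gib = 2 * fp32_gib
print(f"fp32 load peak (rough): ~{peak_load_gib:.1f} GiB")
```

That peak, plus Python/PyTorch overhead and activation memory during generation, is what pushes a 64 GiB box toward OOM; the fp16 setup from the linked blog roughly halves the footprint.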