GPT-J fails on Amazon Sagemaker

I am trying to deploy a GPT-J instance on SageMaker.

This is the code from my Jupyter notebook:

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

# IAM role with permissions to create endpoint
role = sagemaker.get_execution_role()
# Hub Model configuration.
hub = {
	'HF_MODEL_ID': 'EleutherAI/gpt-j-6B', # model id from the Hugging Face Hub
	'HF_TASK': 'text-generation'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	transformers_version='4.17.0', # container versions are illustrative
	pytorch_version='1.10.2',
	py_version='py38',
	env=hub,
	role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1, # number of instances
	instance_type='ml.m5.4xlarge' #'ml.m5.xlarge' # EC2 instance type
)

What I basically changed from the suggested snippet is the instance type.
When calling the endpoint I keep getting errors, which I assume are due to latency or memory.

Example of error:

ReadTimeoutError: Read timeout on endpoint URL: ""

This is on the latest image; I have also tried switching between images.

Can anyone point out what I'm doing wrong?

I would like to point out that I'm just starting out with ML models, so I don't have much background knowledge.


Hey :wave:

You might find solutions in this blog article: Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker


Thanks a lot!
It kinda works. I still get a lot of out-of-memory errors, but I'll mark it as the solution for now!
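For the remaining out-of-memory errors, memory and latency pressure often scales with generation length, so it can help to cap it in the request payload. A minimal sketch, assuming the endpoint runs the Hugging Face text-generation pipeline (parameter names come from that pipeline; the values are illustrative):

```python
# Request payload for a text-generation endpoint; shorter generations
# reduce both latency and memory use per request.
payload = {
    "inputs": "Hello, my name is",
    "parameters": {
        "max_new_tokens": 32,      # cap generation length
        "temperature": 0.7,
        "return_full_text": False, # only return the newly generated text
    },
}
```

This payload would be JSON-serialized and sent as the body of the endpoint invocation.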