[SOLVED] Input error when requesting a batch transform job for zero-shot text classification on SageMaker

Hi everyone, I'm working on using a BART model to classify food purchase data, following this guide: Zero-shot text classification with Amazon SageMaker JumpStart | AWS Machine Learning Blog. Yesterday, I successfully used SageMaker JumpStart to deploy the model and got responses from the endpoint. However, I couldn't get the Batch Transform part to work following the sample code. I managed to get past the initial errors and initiated a batch inference job on SageMaker, but the job kept failing with an error like this:

2024-03-20T01:35:42,499 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1048, in run_single
2024-03-20T01:35:42,499 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     for model_inputs in self.preprocess(inputs, **preprocess_params):
2024-03-20T01:35:42,500 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/zero_shot_classification.py", line 185, in preprocess
2024-03-20T01:35:42,500 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     sequence_pairs, sequences = self._args_parser(inputs, candidate_labels, hypothesis_template)
2024-03-20T01:35:42,500 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/zero_shot_classification.py", line 26, in __call__
2024-03-20T01:35:42,500 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     if len(labels) == 0 or len(sequences) == 0:
2024-03-20T01:35:42,500 [INFO ] W-facebook__bart-large-mnli-7-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - TypeError: object of type 'NoneType' has no len()

The input file I prepared is in JSON Lines (jsonl) format and looks like this:

{"sequences": "snack foods cookies cookie oatmeal raisin 1 12 ct 12 pk cookies oatraisin indiv wrap-bx", "candidate_labels": ["Eggs", "Chicken", "Oils", "Potatoes", "Beans", "Vegetables", "Spices", "Milk", "Grains", "Nuts", "Coffee and tea", "Pork", "Fruits", "Cheese", "Sugars", "Fish", "Liquids", "Beef"], "multi_class": false}
{"sequences": "vegetables fresh peppers fresh pepper bell red 1 25 lb pepper red bu 1-1 9", "candidate_labels": ["Eggs", "Chicken", "Oils", "Potatoes", "Beans", "Vegetables", "Spices", "Milk", "Grains", "Nuts", "Coffee and tea", "Pork", "Fruits", "Cheese", "Sugars", "Fish", "Liquids", "Beef"], "multi_class": false}
{"sequences": "fruit fresh lemons fresh lemon 1 2 lb 86 lemon 2 lb", "candidate_labels": ["Eggs", "Chicken", "Oils", "Potatoes", "Beans", "Vegetables", "Spices", "Milk", "Grains", "Nuts", "Coffee and tea", "Pork", "Fruits", "Cheese", "Sugars", "Fish", "Liquids", "Beef"], "multi_class": false}

My job definition looks like this:

huggingface_model_zero_shot = HuggingFaceModel(
    # model_data=model_uri,       # path to your trained SageMaker model
    env=hub,                      # configuration for loading the model from the Hub
    role=aws_role,                # IAM role with permissions to create an endpoint
    transformers_version="4.17",  # Transformers version used
    pytorch_version="1.10",       # PyTorch version used
    py_version='py38',            # Python version used
)
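(For completeness, hub is the environment dict from the HF sample that tells the container which model and task to load; mine looks like this, matching the worker name in the logs:)

hub = {
    'HF_MODEL_ID': 'facebook/bart-large-mnli',  # model to pull from the Hugging Face Hub
    'HF_TASK': 'zero-shot-classification',      # pipeline task for the inference toolkit
}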

# Create transformer to run a batch job
batch_job = huggingface_model_zero_shot.transformer(
    instance_count=1,
    instance_type='ml.m5.xlarge',
    strategy='SingleRecord',
    assemble_with='Line',
    output_path=s3_path_join("s3://", sagemaker_bucket, "zstc-results"),  # output goes to the same bucket as the input
)

batch_job.transform(
    data=data_upload_path,
    content_type='application/json',
    split_type='Line',
    logs=True,
    wait=True,
)

Could someone please point me in the right direction? Thank you very much!

The error is due to the format of the input JSON string.

For an endpoint using the same model deployed by SageMaker JumpStart, the input payload should look like this:
{"sequence": "snack foods cookies ...", "labels": ["Sugars", "Spices", "Fish", "Oils", ...]}

But for the model pulled directly from the Hugging Face Hub and set up for batch transform, the input should look like this (following the HF docs):
{"inputs": "snack foods cookies ...", "parameters": {"candidate_labels": ["Eggs", "Chicken", ...]}}

After reformatting the input, the batch inference job went through.
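For anyone hitting the same issue, here is a minimal sketch of the reformatting I did (assuming the original file is named input.jsonl; adjust the file names to your setup):

import json

# Rewrite each record from the JumpStart-style keys into the
# inputs/parameters format expected by the Hugging Face container
with open("input.jsonl") as src, open("input_hf.jsonl", "w") as dst:
    for line in src:
        old = json.loads(line)
        new = {
            "inputs": old["sequences"],
            "parameters": {
                "candidate_labels": old["candidate_labels"],
                # note: newer transformers versions call this multi_label
                "multi_class": old.get("multi_class", False),
            },
        }
        dst.write(json.dumps(new) + "\n")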