Error in batch transform job with Hugging Face model and SageMaker

Hi,

I am running a batch transform job in SageMaker with a model from the Hugging Face Hub (ProsusAI/finbert). The only change to the default transformer configuration is joining the ID of each input record with the inference results in the output, but the job fails with a PredictionException. The error displayed is the following:

mms.service.PredictionException: 'str' object has no attribute 'pop' : 400

Has anyone tried to run a batch transform job with SageMaker, joining the prediction results with an identifier from the input file, and succeeded in doing so? Or does anyone know how to handle this error?

The configuration I am using is similar to the one in notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub; the only modification is passing extra parameters to the transformer and the transform call, following Associate Prediction Results with Input Records - Amazon SageMaker.

The input file looks like this:

{"id":"item#1419453569267240963","inputs":"RT LCID is sexier than TSLA It s a fact"}
{"id":"item#1419453569334341640","inputs":"If you were given 1 million and had to invest it all into a single asset what would it be Gold Silver BTC XRP"}
{"id":"item#1419453570710114308","inputs":"RT FINANCE To celebrate the launch of KingsMenCoin Solana s memecoin we are organizing an AIRDROP To be eligible Like RT"}
{"id":"item#1419453577395937283","inputs":"RT Big movers this week with catalyst 1 bzwr updates per and 2 fern big news 3 kync la"}

And this is my code:

from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.s3 import s3_path_join

# sagemaker_session_bucket and role are defined earlier in the notebook
output_s3_path = s3_path_join("s3://", sagemaker_session_bucket, "batch_transform/output")

# Pull the model and task straight from the Hugging Face Hub
hub = {
    'HF_MODEL_ID': 'ProsusAI/finbert',
    'HF_TASK': 'text-classification'
}

huggingface_model = HuggingFaceModel(
    transformers_version='4.6',
    pytorch_version='1.7',
    py_version='py36',
    env=hub,
    role=role,
)

batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord',    # send one record per request
    output_path=output_s3_path,
    accept='application/json',
    assemble_with='Line',       # write one JSON object per output line
)

batch_job.transform(
    data='s3://sagemaker-us-east-1-822164694494/batch_transform/input/preprocessed_item_test.jsonl',
    content_type='application/json',
    split_type='Line',
    input_filter="$.inputs",                    # send only the text to the model
    join_source="Input",                        # join each prediction with its input record
    output_filter="$['id','SageMakerOutput']"   # keep only the id and the prediction
)
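
For reference, the joined output I am expecting per line (values are just illustrative) is something like this:

{"id":"item#1419453569267240963","SageMakerOutput":[{"label":"positive","score":0.98}]}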

And then this is the error in the logs:

2021-11-26 17:59:33,795 [INFO ] W-9000-ProsusAI__finbert com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 4877
2021-11-26 17:59:33,797 [WARN ] W-9000-ProsusAI__finbert com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-ProsusAI__finbert-1
2021-11-26 17:59:33,800 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Prediction error
2021-11-26 17:59:33,800 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-11-26 17:59:33,800 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 222, in handle
2021-11-26 17:59:33,800 [INFO ] W-9000-ProsusAI__finbert com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1
2021-11-26 17:59:33,800 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     response = self.transform_fn(self.model, input_data, content_type, accept)
2021-11-26 17:59:33,800 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 181, in transform_fn
2021-11-26 17:59:33,800 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     predictions = self.predict(processed_data, model)
2021-11-26 17:59:33,801 [INFO ] W-9000-ProsusAI__finbert ACCESS_LOG - /169.254.255.130:35648 "POST /invocations HTTP/1.1" 400 4790
2021-11-26 17:59:33,801 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 142, in predict
2021-11-26 17:59:33,801 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     inputs = data.pop("inputs", data)
2021-11-26 17:59:33,801 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - AttributeError: 'str' object has no attribute 'pop'
2021-11-26 17:59:33,801 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - 
2021-11-26 17:59:33,801 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - During handling of the above exception, another exception occurred:
2021-11-26 17:59:33,801 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - 
2021-11-26 17:59:33,802 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-11-26 17:59:33,802 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.6/site-packages/mms/service.py", line 108, in predict
2021-11-26 17:59:33,802 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     ret = self._entry_point(input_batch, self.context)
2021-11-26 17:59:33,803 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -   File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 231, in handle
2021-11-26 17:59:33,803 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -     raise PredictionException(str(e), 400)
2021-11-26 17:59:33,803 [INFO ] W-ProsusAI__finbert-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: 'str' object has no attribute 'pop' : 400

As always, any help is much appreciated!!


Was there ever a resolution to this? I am having a similar issue.

I am interested in this as well; I'm seeing a similar error while using AWS Clarify and an HF model.

Also experiencing this still.

I think I ran into this with empty strings – are any of your rows missing data in "inputs"? Otherwise this setup works for me (using BERT).

Ultimately I believe the data.pop("inputs", data) line in handler_service.py (visible in the traceback above) is where the error is coming from.
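
If you want to rule out empty strings, a quick scan of the input file will flag any rows with a missing or empty "inputs" field (the file name here is just an example – point it at your own input JSONL):

import json

# Flag rows whose "inputs" field is missing or empty
with open("preprocessed_item_test.jsonl") as f:
    for n, line in enumerate(f, 1):
        record = json.loads(line)
        if not record.get("inputs"):
            print(f"row {n} has a missing/empty 'inputs' field: {record}")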


In case anyone is still struggling with this, here is what I think is happening (which may or may not be true, but it’s working for me now):
When using the input filter, you are applying a JSONPath filter. If you have input of the form [{"inputs": "some string", "id": 12345}, ...], then using $.inputs as an input filter reduces each record (and thus the payload sent to the model) to just the string stored under the "inputs" key, so ["some string", "another string", ...]. This causes issues with the following line in handler_service.py of the sagemaker-huggingface-inference-toolkit: inputs = data.pop("inputs", data) (line 167; not sure how to link to it directly). The pop method won't work on a plain string or a list of strings, so it throws an AttributeError at this point.
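
To make this concrete, here is a minimal sketch of the pop behavior (my simplification, not the toolkit's actual code):

import json

def toolkit_pop(raw_body):
    data = json.loads(raw_body)       # the handler deserializes the request body
    return data.pop("inputs", data)   # fine for a dict, fails for a plain string

# Without input_filter, each line is still a JSON object, so pop works:
print(toolkit_pop('{"id": "item#1", "inputs": "some string"}'))   # -> some string

# With input_filter="$.inputs", the record is reduced to a bare JSON string,
# and str has no pop method:
try:
    toolkit_pop('"some string"')
except AttributeError as e:
    print(e)   # 'str' object has no attribute 'pop'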

I realized that I don't even need to apply an input filter, since the text will be 'popped' out anyway (as long as it is stored under "inputs" in the input data). So I just leave out the input filter, and instead associate the entire output with the original (unfiltered) input, like so:

batch_job.transform(
    data=input_s3_uri,
    data_type="S3Prefix",
    content_type="application/json",
    split_type="Line",
    join_source="Input",
    output_filter="$['id','inputs','SageMakerOutput']"
)

The labels and score are contained in ‘SageMakerOutput’. I then use a custom post-processing function that reads in the jsonl data and transforms it into a pandas dataframe, which I then write to S3.
Like I said, this is just what worked for me, but I thought I'd share it because this issue was giving me headaches for quite a while and there doesn't seem to be clear guidance/documentation on it.
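
In case it helps, here is a minimal sketch of that post-processing step. The bucket, key, the ".out" suffix on the output file (batch transform's default naming), and the exact shape of SageMakerOutput are assumptions – adapt them to your setup:

import json

import boto3
import pandas as pd

# Assumed location of the joined output (batch transform appends ".out"
# to each input file name by default)
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket",
                    Key="batch_transform/output/preprocessed_item_test.jsonl.out")

rows = []
for line in obj["Body"].read().decode("utf-8").splitlines():
    record = json.loads(line)
    prediction = record["SageMakerOutput"]
    if isinstance(prediction, list):   # the pipeline may wrap the result in a list
        prediction = prediction[0]
    rows.append({"id": record["id"],
                 "inputs": record["inputs"],
                 "label": prediction["label"],
                 "score": prediction["score"]})

df = pd.DataFrame(rows)
df.to_csv("s3://my-bucket/batch_transform/processed/results.csv", index=False)  # requires s3fs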