InternalServerException when running a model loaded from S3

Hi there,

I am trying to deploy a model stored on S3, following the steps shown mainly in this video: [Deploy a Hugging Face Transformers Model from S3 to Amazon SageMaker](https://www.youtube.com/watch?v=pfBGgSGnYLs).

For that I have downloaded a model into an S3 bucket and I am using this image URI for the DLC: `image_uri = "763104351884.dkr.ecr.eu-west-1.amazonaws.com/huggingface-pytorch-inference:1.7.1-transformers4.6.1-cpu-py36-ubuntu18.04"`
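For reference, this is roughly the deployment code I am running (the bucket path, role, and instance type below are placeholders, not my real values):

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

# Model artifact packaged as model.tar.gz and uploaded to S3 (placeholder path)
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role=role,
    image_uri="763104351884.dkr.ecr.eu-west-1.amazonaws.com/huggingface-pytorch-inference:1.7.1-transformers4.6.1-cpu-py36-ubuntu18.04",
)

# Deploy to a real-time endpoint (placeholder instance type)
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

# This call is where the error occurs
data = {"inputs": "Me encanta esta [MASK]."}
predictor.predict(data)
```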

When I run `predictor.predict(data)`, the call fails with an `InternalServerException`.

The model I am using for these tests is dccuchile/bert-base-spanish-wwm-uncased, and I could not find a way to tell the model which task it should perform.
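From what I have seen in other threads, I suspect the task may need to be passed via the `HF_TASK` environment variable when creating the model, something like the sketch below, but I am not sure this is the right mechanism (the `fill-mask` value is just my guess, since this is a BERT masked language model):

```python
# Untested guess: tell the inference toolkit which pipeline task to run
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model/model.tar.gz",  # placeholder
    role=role,
    image_uri=image_uri,
    env={"HF_TASK": "fill-mask"},  # guessing fill-mask for a masked LM
)
```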

I am pretty new to the Hugging Face ecosystem, so I am probably missing something obvious here.

Could you please let me know how I should tell the model which task to perform?

Thank you!