Emotion Model: Additional inference parameter not processed in SageMaker

I am using the j-hartmann/emotion-english-distilroberta-base model via AWS SageMaker.

The model card specifies an additional parameter, return_all_scores, which should return the scores for all seven emotions. This works out of the box with the Hugging Face transformers library and with the Inference API, but it does not work when the model is hosted on AWS SageMaker.
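For comparison, the parameter behaves as documented when the pipeline is used directly via transformers; a minimal sketch following the model card (the example sentence and the abbreviated output are illustrative):

from transformers import pipeline

# load the emotion classifier and request scores for all seven classes
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    return_all_scores=True,
)

print(classifier("i am happy today"))
# e.g. [[{'label': 'anger', 'score': ...}, ..., {'label': 'joy', 'score': ...}, ...]]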

Do you know what might be the problem?

# deploy the serverless endpoint
predictor = huggingface_model.deploy(
    serverless_inference_config=serverless_config
)

data = {
    "inputs": "i am happy today",
    "parameters": {"return_all_scores": True}
}

# request
result = predictor.predict(data)
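The snippet above assumes that huggingface_model and serverless_config were created earlier. A minimal sketch of that setup with the SageMaker Python SDK, using the versions mentioned further down in the thread; the execution role handling, memory size, concurrency, and py_version are assumptions:

import sagemaker
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.serverless import ServerlessInferenceConfig

role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

# model definition pointing at the Hub model
huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "j-hartmann/emotion-english-distilroberta-base",
        "HF_TASK": "text-classification",
    },
    role=role,
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",  # assumed; must match the chosen DLC
)

# serverless endpoint configuration (values are assumptions)
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=4096,
    max_concurrency=1,
)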

@RomanEngeler1805 Which transformers_version and pytorch_version did you use to create the endpoint? Could you try the latest if you haven't? You can find them here: Reference

Hi @philschmid. I am using
transformers_version="4.6.1"
pytorch_version="1.7.1"
which are the latest according to the reference.

Here is a screenshot of the complete script that I am using:

Could you please update to transformers_version = 4.17 and PyTorch 1.10 and re-try? You can find all available DLCs here: Reference
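With the SageMaker Python SDK, this amounts to recreating the model with the newer version arguments and redeploying; presumably the newer DLC ships an inference toolkit that forwards the "parameters" field of the request to the pipeline. A sketch (py_version is an assumption and must match the chosen DLC):

# recreate the model with the newer DLC versions, then redeploy
huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "j-hartmann/emotion-english-distilroberta-base",
        "HF_TASK": "text-classification",
    },
    role=role,
    transformers_version="4.17",
    pytorch_version="1.10",
    py_version="py38",  # assumed; must match the 4.17 / 1.10 DLC
)

predictor = huggingface_model.deploy(
    serverless_inference_config=serverless_config
)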

Yes, that worked @philschmid. Thanks :pray: