Facebook/bart-large-mnli inference when deployed on SageMaker

Hi,

Any idea/documentation on how to compose the JSON payload for running inference against the facebook/bart-large-mnli model deployed on SageMaker? I used the code provided under "Deploy → Amazon SageMaker → AWS" on the model page:

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'facebook/bart-large-mnli',
    'HF_TASK': 'zero-shot-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,  # number of instances
    instance_type='ml.m5.xlarge'  # ec2 instance type
)

Hello @striki-ai,

This documentation link should help you: Reference

Example:

predictor.predict({
  "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!",
  "parameters": {
    "candidate_labels": ["refund", "legal", "faq"]
  }
})
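If you are not using the SageMaker Python SDK's predictor, the same payload can be composed as raw JSON and sent to the endpoint with boto3's `invoke_endpoint`. A minimal sketch (the endpoint name is a placeholder, and the boto3 call is shown commented out since it requires a live endpoint):

```python
import json

# Payload shape for the zero-shot-classification task:
# "inputs" is the text to classify, and
# "parameters.candidate_labels" lists the candidate classes.
payload = {
    "inputs": "Hi, I recently bought a device from your company but it is "
              "not working as advertised and I would like to get reimbursed!",
    "parameters": {
        "candidate_labels": ["refund", "legal", "faq"]
    }
}
body = json.dumps(payload)

# To call the deployed endpoint directly (placeholder endpoint name):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="your-endpoint-name",
#     ContentType="application/json",
#     Body=body,
# )
# result = json.loads(response["Body"].read())
```

The response for this task is a JSON object with the original `sequence`, the `labels` sorted by score, and the corresponding `scores`.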