Client error from model

I tried to deploy a zero-shot classification model (facebook/bart-large-mnli) and used the exact code recommended on the model page for AWS deployment.

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
	'HF_MODEL_ID':'facebook/bart-large-mnli',
	'HF_TASK':'zero-shot-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	transformers_version='4.17.0',
	pytorch_version='1.10.2',
	py_version='py38',
	env=hub,
	role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1, # number of instances
	instance_type='ml.m5.xlarge' # ec2 instance type
)

predictor.predict({
	'inputs': "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!"
})

It returned a client error, and I was wondering if I could get some tips on this. Thanks.

Hello @ujjirox,

Thank you for opening this thread! The inputs in your request are not correct for the zero-shot-classification task: it also needs candidate_labels in a parameters object, not just the inputs string. You can find the right format here: Supported Transformers & Diffusers Tasks

{
  "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!",
  "parameters": {
    "candidate_labels": ["refund", "legal", "faq"]
  }
}