Getting an error in the inference stage of a Transformers model (Hugging Face)

Greetings Everyone!

I have fine-tuned the model on a custom dataset and am now trying to deploy it with Amazon SageMaker using this code:

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
   model_data="s3://stnewssentiment/model.tar.gz",  # path to your trained sagemaker model
   role=role,  # iam role with permissions to create an Endpoint
   transformers_version="4.17",  # transformers version used
   pytorch_version="1.10",  # pytorch version used
   py_version="py38",  # python version of the DLC
   env={'HF_TASK': 'text-classification'},
)

predictor = huggingface_model.deploy(
   initial_instance_count=1,
   instance_type="ml.m5.xlarge",  # instance type shown here as an example
)
But when I try to predict using `predictor.predict(data)`, it gives me the error below.

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "Can't load config for '/.sagemaker/mms/models/model'. If you were trying to load it from '', make sure you don't have a local directory with the same name. Otherwise, make sure '/.sagemaker/mms/models/model' is the correct path to a directory containing a config.json file"
}". See in account 430206693130 for more information.

I have tried the fixes this community has already discussed, but to no avail.
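For reference, since the error complains that no config.json can be found, here is roughly how I build model.tar.gz. This is a sketch; `fine_tuned_model` is a placeholder for the directory produced by `save_pretrained`. My understanding is that the SageMaker Hugging Face inference container expects config.json, the weights, and tokenizer files at the ROOT of the archive, not nested inside a subdirectory:

```python
import os
import tarfile

def package_model(model_dir: str, archive_path: str = "model.tar.gz") -> None:
    """Pack the contents of model_dir at the root of archive_path.

    model_dir is assumed to hold the output of model.save_pretrained(...)
    and tokenizer.save_pretrained(...): config.json, the weights, etc.
    """
    with tarfile.open(archive_path, "w:gz") as tar:
        for name in os.listdir(model_dir):
            # arcname=name strips the directory prefix so each file
            # lands at the top level of the archive
            tar.add(os.path.join(model_dir, name), arcname=name)
```

After packaging, `tar tzf model.tar.gz` should list config.json directly, with no leading folder name, before the archive is uploaded to S3.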