Pushing Model through CLI

Hello everyone, I am using the code below to push my local model to the Hub:

from transformers import AutoModel, AutoTokenizer

def push_model_and_tokenizer_to_hub(local_model_directory, hub_model_id, api_token):
    model = AutoModel.from_pretrained(local_model_directory)
    tokenizer = AutoTokenizer.from_pretrained(local_model_directory)

    # Push tokenizer to Hub (`use_auth_token` is deprecated; `token` replaces it)
    tokenizer.push_to_hub(
        repo_id=hub_model_id,
        token=api_token,
    )

    # Push model to Hub
    model.push_to_hub(
        repo_id=hub_model_id,
        token=api_token,
    )

local_model_directory = r"C:\Users\raksh\Desktop\LANGTEST\langtest\roberta"
hub_model_id = "test1"
api_token = "hf_NsWAxxxxxxxxxxxxxxxxxxxx"

push_model_and_tokenizer_to_hub(local_model_directory, hub_model_id, api_token)

The model files are pushed successfully: Rakshit122/test1 · Hugging Face

But when I make a sample prediction with the Hosted Inference API, it shows me an embedding matrix, not the actual binary prediction.

Can anyone tell me what I am doing wrong here?
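
For what it's worth, a likely cause: `AutoModel.from_pretrained` loads only the base encoder and discards any task head, so the pushed checkpoint has no classifier and the Inference API falls back to feature extraction (the embedding matrix). A sketch of the same push using `AutoModelForSequenceClassification` instead, assuming the local checkpoint was saved from a sequence-classification model (the function name here is just illustrative):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def push_classifier_to_hub(local_model_directory, hub_model_id, api_token):
    # AutoModelForSequenceClassification keeps the classification head that
    # AutoModel would drop, so the uploaded checkpoint can serve text
    # classification on the Inference API instead of raw embeddings.
    model = AutoModelForSequenceClassification.from_pretrained(local_model_directory)
    tokenizer = AutoTokenizer.from_pretrained(local_model_directory)

    tokenizer.push_to_hub(repo_id=hub_model_id, token=api_token)
    model.push_to_hub(repo_id=hub_model_id, token=api_token)
```

This is only a sketch; whether it applies depends on how the local `roberta` checkpoint was trained and saved.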