Deploying meta-llama/Llama-2-7b-chat-hf on SageMaker results in complete hallucinations

I managed to deploy meta-llama/Llama-2-7b-chat-hf on SageMaker using the script on the model's page. When I tried to run inference against the deployed endpoint, the output was complete hallucination: just junk letters and words.
I got an assertion error on this line of the script:

assert hub['HUGGING_FACE_HUB_TOKEN'] != '', "You have to provide a token."

It kept raising that error, so I skipped the line and resumed, and the deployment went through. But the inference results are as described above.
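For reference, here is roughly what I understand that part of the script to look like when the token is actually filled in (the token value below is a placeholder, not a real one, and the exact hub keys are my assumption from the model-page script):

```python
# Hub environment config for the SageMaker deployment (placeholder values).
hub = {
    "HF_MODEL_ID": "meta-llama/Llama-2-7b-chat-hf",
    "SM_NUM_GPUS": "1",
    # A real Hugging Face access token (with Llama-2 access granted) belongs
    # here; "hf_..." is only a placeholder.
    "HUGGING_FACE_HUB_TOKEN": "hf_...",
}

# The check I skipped: it only passes once a non-empty token string is set.
assert hub["HUGGING_FACE_HUB_TOKEN"] != "", "You have to provide a token."
print("token check passed")
```

My guess is that skipping this check may be related to the garbage output, since the gated model weights presumably cannot be downloaded without a valid token.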
Thanks in advance for any help.