Error loading finetuned Llama 2 model while running inference

I get the same error here for CodeLlama-7b-hf-Instruct.

I even included a requirements.txt file inside model.tar.gz pinning transformers==4.33.2, but it doesn't work.
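For reference, this is roughly how I packaged it — a minimal sketch assuming the SageMaker Hugging Face toolkit convention of a `code/` directory inside model.tar.gz (the `model/` staging directory name is just illustrative):

```shell
# Stage the archive contents; the toolkit looks for code/requirements.txt
mkdir -p model/code
echo "transformers==4.33.2" > model/code/requirements.txt
# (model weights, config.json, tokenizer files, etc. go at the top level of model/)

# Build the archive from inside the staging dir so paths are relative
tar -czf model.tar.gz -C model .

# Sanity-check that requirements.txt landed at the expected path
tar -tzf model.tar.gz | grep 'code/requirements.txt'
```

If requirements.txt sits at the archive root instead of under `code/`, it is silently ignored, which is one possible reason the pin never takes effect.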
Any ideas?