Deploying a custom inference script with a fine-tuned Llama 2 model

You cannot use the LLM container with a custom inference script. The LLM container serves models through its own built-in handler and does not accept a user-supplied entry-point script. To run your own inference code for the fine-tuned Llama 2 model, deploy it with the standard Hugging Face inference container instead, which does accept a custom inference script.
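As a rough sketch of the alternative path, the snippet below collects the arguments one would pass to `sagemaker.huggingface.HuggingFaceModel` (the standard Hugging Face inference container), which supports an `entry_point` script, and wraps the actual deployment in a function that is only meaningful inside an AWS environment. The S3 URI, IAM role ARN, instance type, and framework versions are all placeholders, not values from the original post.

```python
def huggingface_model_kwargs(model_data: str, role: str) -> dict:
    """Arguments for sagemaker.huggingface.HuggingFaceModel with a custom script.

    model_data: s3:// URI of the packaged model.tar.gz for the fine-tuned model.
    role: IAM execution role ARN.
    """
    return {
        "model_data": model_data,
        "role": role,
        "entry_point": "inference.py",   # custom inference script
        "source_dir": "code",            # directory containing inference.py
        # Placeholder versions -- pick ones supported by your SDK release.
        "transformers_version": "4.28",
        "pytorch_version": "2.0",
        "py_version": "py310",
    }


def deploy_finetuned_llama2(model_data: str, role: str):
    """Deploy the model; requires AWS credentials and the sagemaker SDK."""
    from sagemaker.huggingface import HuggingFaceModel  # imported lazily

    model = HuggingFaceModel(**huggingface_model_kwargs(model_data, role))
    # Instance type is a placeholder; size it to the model.
    return model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
```

In `inference.py` you would then implement the usual SageMaker hooks (`model_fn`, `predict_fn`, and optionally `input_fn`/`output_fn`) to load the fine-tuned weights and run generation.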