Payload format for LeoLM/leo-mistral-hessianai-7b-chat SageMaker endpoint

Hello everyone,

I just deployed the LeoLM model as a SageMaker endpoint via the code snippet provided on the model page:

import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

try:
	role = sagemaker.get_execution_role()
except ValueError:
	iam = boto3.client('iam')
	role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
	'HF_MODEL_ID':'LeoLM/leo-mistral-hessianai-7b-chat',
	'SM_NUM_GPUS': json.dumps(1)
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	image_uri=get_huggingface_llm_image_uri("huggingface",version="1.1.0"),
	env=hub,
	role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1,
	instance_type="ml.g5.2xlarge",
	container_startup_health_check_timeout=300,
)

# send request
predictor.predict({
	"inputs": "My name is Julien and I like to",
})

When I make a prediction, the response is pretty underwhelming, which I think is because I didn't pass any generation parameters or a system prompt with the request. With Llama 2 I can easily do that like this:

prompt = "Tell me about Amazon SageMaker."

payload = {
    "inputs": prompt,
    "parameters": {
        "do_sample": True,
        "top_p": 0.9,
        "temperature": 0.8,
        "max_new_tokens": 1024,
        "stop": ["<|endoftext|>", "</s>"]
    }
}
response = predictor.predict(payload)

I haven't found any specification or hints on how to structure my request to achieve this. Does anyone have an idea?

Any help would be much appreciated 🙂

You can do the same for the Mistral model.
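
For example, something along these lines should work with the endpoint deployed above (the ChatML prompt template and the <|im_end|> stop token are my assumptions based on how the LeoLM chat models are usually prompted, so please double-check against the model card):

# Assumes the `predictor` object from the deployment snippet above
system_prompt = "Du bist ein hilfreicher Assistent."          # example system prompt
user_prompt = "Erkläre mir kurz, was Amazon SageMaker ist."   # example user message

# Build a ChatML-style prompt, which (as far as I know) is what
# leo-mistral-hessianai-7b-chat was fine-tuned on
prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Same TGI payload structure as with Llama 2: the formatted prompt goes
# into "inputs", generation parameters go under "parameters"
payload = {
    "inputs": prompt,
    "parameters": {
        "do_sample": True,
        "top_p": 0.9,
        "temperature": 0.8,
        "max_new_tokens": 1024,
        "stop": ["<|im_end|>", "</s>"],
    },
}

response = predictor.predict(payload)
print(response[0]["generated_text"])

The system prompt simply becomes part of the "inputs" string; the TGI payload has no separate field for it.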

Hey, thanks for getting back to this topic. Do you mean hosting this model, TheBloke/em_german_leo_mistral-GGUF, instead of LeoLM?