Increase output length of Falcon 7B Instruct

Hello everyone,

I have deployed Falcon 7B Instruct using the AWS SageMaker SDK code from the model's page:

import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

try:
	role = sagemaker.get_execution_role()
except ValueError:
	iam = boto3.client('iam')
	role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
	'HF_MODEL_ID':'tiiuae/falcon-7b-instruct',
	'SM_NUM_GPUS': json.dumps(1)
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	image_uri=get_huggingface_llm_image_uri("huggingface",version="0.8.2"),
	env=hub,
	role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1,
	instance_type="ml.g5.2xlarge",
	container_startup_health_check_timeout=300,
)

# send request
predictor.predict({
	"inputs": "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?",
})

It is working just fine, but the outputs are too short (just a couple of sentences). Where should I make changes so that it produces at least one paragraph? Thank you in advance.

cc @philschmid
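
In the meantime, I am also looking at passing generation parameters per request. If I understand the TGI-based LLM container correctly, the payload can include a "parameters" field with max_new_tokens; below is a minimal sketch of what I intend to try (the values are placeholders, not recommendations):

# send request with explicit generation parameters (placeholder values)
predictor.predict({
	"inputs": "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?",
	"parameters": {
		"do_sample": True,
		"max_new_tokens": 256,      # upper bound on generated tokens
		"temperature": 0.8,
		"top_p": 0.9,
		"repetition_penalty": 1.03,
	},
})

If that works, the output length should be controllable per request without redeploying the endpoint.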

I had made some changes in the deployment section, but afterwards I made another deployment, this time changing the hub configuration:

# Hub Model configuration. https://huggingface.co/models
hub = {
	'HF_MODEL_ID':'tiiuae/falcon-7b-instruct',
	'SM_NUM_GPUS': json.dumps(1),
	'MAX_TOTAL_TOKENS': json.dumps(2048),
}

And that seems to have increased the output length.
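
For reference, this is the fuller configuration I plan to use next, assuming the container also honours MAX_INPUT_LENGTH the same way it did MAX_TOTAL_TOKENS (the values are just what I intend to try):

# Hub Model configuration with explicit token limits (values are what I intend to try)
hub = {
	'HF_MODEL_ID': 'tiiuae/falcon-7b-instruct',
	'SM_NUM_GPUS': json.dumps(1),
	'MAX_INPUT_LENGTH': json.dumps(1024),   # max tokens accepted in the prompt
	'MAX_TOTAL_TOKENS': json.dumps(2048),   # max prompt + generated tokens
}

Combined with max_new_tokens in the request parameters, this should leave enough headroom for at least a paragraph of output.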