HuggingChat with SageMaker inference

I have a fine-tuned model deployed to SageMaker, with no obvious errors in the deployment.
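Before digging into HuggingChat itself, one way to rule out the endpoint is to invoke it directly with boto3. A minimal sketch, assuming the endpoint name from the invocation URL in my config below and a TGI-style request body (the payload shape is an assumption about the container):

```python
import json

# Endpoint name taken from the invocation URL in .env.local
ENDPOINT_NAME = "huggingface-pytorch-tgi-inference-2024-07-14-13-13-57-737"
REGION = "us-east-1"

# TGI-style request body (assumed, since the model was deployed with the
# huggingface-pytorch-tgi-inference container)
payload = {"inputs": "Hello, world", "parameters": {"max_new_tokens": 32}}
body = json.dumps(payload)

def invoke_endpoint():
    # boto3 imported lazily so the sketch loads even without it installed
    import boto3

    client = boto3.client("sagemaker-runtime", region_name=REGION)
    resp = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=body,
    )
    return resp["Body"].read().decode()

# Usage (requires AWS credentials with sagemaker:InvokeEndpoint):
#   print(invoke_endpoint())
```

If this returns generated text, the endpoint itself is healthy and the problem is in the HuggingChat configuration or networking.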

I have deployed HuggingChat via Docker on an EC2 instance in my AWS account.

When I hit the HuggingChat port in my browser, I get a 500 error.
When I use the default Mistral model with HuggingChat, it works fine.

There are no clear errors in the Docker logs for the HuggingChat container.
EDIT: I do get one opaque-looking error when I hit the interface:

```json
{"level":50,"time":1720977052720,"pid":23,"hostname":"b8a5c6e52a28","locals":{"sessionId":"25ca5583e75d9f35c6cbe05a3ad866714d0dc4f72e88a533110a817df9f15661"},"url":"https://x.x.x.x:3000/","params":{},"request":{},"error":{"lineNumber":1,"columnNumber":1},"errorId":"9deb4e3a-c533-45b6-9e92-be021ec0acfe"}
```

Any thoughts welcome.

My .env.local looks like this:

```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=redacted
MODELS=[{ "name":"ModelName redacted", "displayName":"YourModel", "description":"Yourdescription", "parameters":{ "max_new_tokens":4096 }, "endpoints":[ { "type":"aws", "service":"sagemaker", "url":"https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/huggingface-pytorch-tgi-inference-2024-07-14-13-13-57-737/invocations", "accessKey":"redacted", "secretKey":"redacted", "sessionToken":"", "region":"us-east-1", "weight":1 } ] }]
```

(Note: the original value was missing the comma after `"service":"sagemaker"`; it is included above.)
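Since `MODELS` must be valid JSON, it is worth parsing the value before chat-ui does; a missing or stray comma otherwise fails with little feedback. A quick sketch that checks a `MODELS` value of this shape (redacted values left as placeholders):

```python
import json

# The MODELS value from .env.local, credentials replaced with placeholders.
models_value = """[{
  "name": "ModelName redacted",
  "displayName": "YourModel",
  "description": "Yourdescription",
  "parameters": { "max_new_tokens": 4096 },
  "endpoints": [{
    "type": "aws",
    "service": "sagemaker",
    "url": "https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/huggingface-pytorch-tgi-inference-2024-07-14-13-13-57-737/invocations",
    "accessKey": "redacted",
    "secretKey": "redacted",
    "sessionToken": "",
    "region": "us-east-1",
    "weight": 1
  }]
}]"""

# Raises json.JSONDecodeError (with line/column info) if the value is malformed
parsed = json.loads(models_value)
print(parsed[0]["endpoints"][0]["service"])
```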