Hello everyone,
I have been trying to deploy anymodality/llava-v1.5-7b on SageMaker as shown in the notebook found here: deploy_llava.ipynb · anymodality/llava-v1.5-7b at main.
The model deploys successfully; however, I get an error whenever I perform inference.
I have been trying different approaches and searching a lot, but I still get the same error.
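For reference, my inference call follows the standard boto3 invoke_endpoint pattern, roughly as in the sketch below (the endpoint name and the payload keys here are placeholders, not the exact values from the notebook):

```python
# Minimal sketch of the inference call; the endpoint name and payload keys are
# placeholders -- the real request format is the one used in deploy_llava.ipynb.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    # Hypothetical schema for illustration: an image reference plus a prompt.
    "image": "https://example.com/example.jpg",
    "question": "What is shown in this image?",
}

response = runtime.invoke_endpoint(
    EndpointName="llava-v1-5-7b-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(response["Body"].read().decode("utf-8"))
```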
Here is the error I get:
```
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{ "code": 400, "type": "InternalServerException", "message": "GET was unable to find an engine to execute this computation" }"
```