Hi, I’m very new to this and I’m trying to get an Inference Endpoint working, but I’m hitting a 500 Internal Server Error. The endpoint logs show the model initializing successfully, yet the error comes back instantly on every request. I’ve tested the EndpointHandler locally and it returns the expected result.
My handler.py and requirements.txt are here: https://huggingface.co/pwaldron/conroy-test
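For reference, this is roughly how I exercise the handler locally. It’s a minimal sketch that assumes handler.py follows the usual custom-handler convention (an EndpointHandler class with __init__(self, path="") and __call__(self, data)) and that the model files sit next to handler.py:

```python
import base64

from handler import EndpointHandler

# load the handler from the current directory, as the endpoint would on startup
handler = EndpointHandler(path=".")

# base64-encode the input image, matching what the deployed endpoint receives
with open("input.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

# same payload shape as the real request body
result = handler({"inputs": "A pencil, donut and apple on a table", "image": b64_image})
print(type(result))
```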
I’m using the following Python script to test the endpoint:
import base64

import requests

API_URL = "XXXX"
headers = {
    "Authorization": "Bearer hf_XXXX",
    "Content-Type": "application/json",
    "Accept": "image/png",
}

# base64-encode the input image so it can be sent in the JSON body
with open("input.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read())

# prepare sample payload
payload = {
    "inputs": "A pencil, donut and apple on a table",
    "image": b64_image.decode("utf-8"),
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

output = query(payload)
print(output)
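Since the 500 body is presumably JSON or plain text rather than a PNG, a variant of query that surfaces the status code and error body should show what the server is actually complaining about (query_debug is just a hypothetical name for this sketch):

```python
def query_debug(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    if not response.ok:
        # on failure the endpoint returns an error message, not image bytes
        print(response.status_code, response.text)
        response.raise_for_status()
    return response.content
```

Printing response.text on failure should at least narrow down whether the error comes from the handler itself or from the serving layer.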