500 Internal Server Error with Inference Endpoint

Hi, I’m very new to this and I’m trying to get an Inference Endpoint working, but I’m encountering a 500 Internal Server Error. The endpoint log shows the model initializing successfully, and the error is returned instantly. I’ve tested the EndpointHandler locally and it works, returning the expected result.
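For reference, this is roughly how I exercise the handler locally (a minimal sketch; it assumes handler.py is in the working directory and follows the standard EndpointHandler(path) / __call__(data) interface, with the same "inputs"/"image" payload as in the request below):

# local_test.py - run the custom handler directly, without the endpoint
import base64

from handler import EndpointHandler

handler = EndpointHandler(path=".")

with open("input.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

result = handler({
    "inputs": "A pencil, donut and apple on a table",
    "image": b64_image,
})
print(type(result))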

My handler.py and requirements.txt are here: https://huggingface.co/pwaldron/conroy-test
I’m using the following Python to test:

import requests
import base64

API_URL = "XXXX"
headers = {
    "Authorization": "Bearer hf_XXXX",
    "Content-Type": "application/json",
    "Accept": "image/png"
}

# read the input image and base64-encode it so it can travel in the JSON payload
with open("input.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read())

# prepare sample payload
payload = {
    "inputs": "A pencil, donut and apple on a table",
    "image": b64_image.decode("utf-8")
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

output = query(payload)
print(output)
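Since Accept is image/png, a successful response should be the raw PNG bytes; while debugging the 500 it may help to print the status code and error body instead of the bytes (a sketch reusing API_URL, headers and payload from above; the output filename is arbitrary):

def query_debug(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    if response.status_code != 200:
        # on failure the body is usually a JSON/text error message, not an image
        print("HTTP", response.status_code, response.text)
        return None
    return response.content

output = query_debug(payload)
if output is not None:
    # success: write the returned PNG bytes to disk
    with open("output.png", "wb") as f:
        f.write(output)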

I also see a 500 Internal Server Error using the code suggested next to the playground. Is there a server issue?

Same for me… And there’s nothing about it in the logs; only the analytics page shows the 500 errors.

Just to share a quick update: I got it to work, but I’m unsure exactly what caused the issue. I added the following to requirements.txt:

  • transformers
  • diffusers
  • accelerate
  • mediapipe

After updating the endpoint, it started to work! But after removing them from requirements.txt again and testing on another endpoint, the endpoint just worked anyway! So I’m not sure what changed.
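If it happens again, one way to check whether the dependency set on the endpoint actually changed is to log the installed versions when the handler starts, for example (a sketch; it assumes you are free to add this to handler.py):

import importlib.metadata
import logging

logger = logging.getLogger(__name__)

# log which of the suspect packages are present in the endpoint image, and at what version
for pkg in ("transformers", "diffusers", "accelerate", "mediapipe"):
    try:
        logger.warning("%s==%s", pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        logger.warning("%s is not installed", pkg)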

@pwaldron: This has nothing to do with your configuration. An internal server error response when calling the API is, of course, on the HF side: Inference API down? - #9 by pd-t