Serverless Inference API [error 500]

The HF Serverless Inference endpoint for hexgrad/Kokoro-82M on Hugging Face throws an Internal Server Error (500)

when called with correct credentials (the call still counts against my daily quota).
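Roughly what I'm sending (a sketch; the token and input text are placeholders, and the URL is the standard Serverless Inference API route):

```python
# Build the POST request the Serverless Inference API expects for this model.
# Placeholder token/text; send with e.g. `requests.post(**req)` to reproduce.
API_URL = "https://api-inference.huggingface.co/models/hexgrad/Kokoro-82M"

def build_tts_request(token: str, text: str) -> dict:
    """Assemble the pieces of the text-to-speech POST request."""
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"inputs": text},
    }

req = build_tts_request("hf_xxx", "Hello world")
# requests.post(**req)  # this is the call that comes back with HTTP 500
```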

I suspect it's something misconfigured on the model creator's side?

I brought it up there, but the creator isn't sure either and referred me to HF, asking me to report back if it turns out to be on his side.

Can someone here help me pinpoint the issue?

Thank you very much!


In order to use a model via the Serverless Inference API, it must correspond to one of the libraries built into the HF server; this is recognized automatically to some extent. In ambiguous cases, the library_name must be specified in the model card metadata as shown below, but it is currently not specified for this model. Also, it seems that Kokoro does not support transformers…

HF staff may know of some more precise means…

```yaml
---
license: apache-2.0
language:
  - en
base_model:
  - yl4579/StyleTTS2-LJSpeech
pipeline_tag: text-to-speech
library_name: transformers
---
```

Thank you, John!
