Good day all,
We’ve been using the ProtBert model (Rostlab/prot_bert) with a local binary on our own compute, but we’d like to migrate to the paid Hugging Face Inference API.
I haven’t found any reference for how to format the request body for this model, and this is my failing attempt:
curl --location --request POST 'https://api-inference.huggingface.co/models/Rostlab/prot_bert' \
--header 'Authorization: Bearer api_org_GE...' \
--header 'Content-Type: application/json' \
--data-raw '"A K G E [MASK]"'
The response is a 500 error with an opaque error message:
{
"error": "None"
}
I verified that the API token is valid by running the mask-filling pipeline against a different model, like this:
curl --location --request POST 'https://api-inference.huggingface.co/models/distilbert-base-uncased' \
--header 'Authorization: Bearer api_org_GE...' \
--header 'Content-Type: application/json' \
--data-raw '"Hello [MASK]"'
This returned the expected response:
[
{
"sequence": "[CLS] hello! [SEP]",
"score": 0.7346376776695251,
"token": 999,
"token_str": "!"
},
...
]
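One variant we plan to try next, in case the problem is just the payload shape: the general Inference API docs describe wrapping the text in a JSON object under an `inputs` key rather than sending a bare JSON string. This is only a sketch based on those general docs, not anything prot_bert-specific, and we haven’t confirmed it resolves the 500:

```shell
# Sketch: same request, but with the sequence wrapped in an "inputs"
# object per the general Inference API docs (unverified for prot_bert).
PAYLOAD='{"inputs": "A K G E [MASK]"}'

# Sanity-check that the payload is well-formed JSON before sending:
echo "$PAYLOAD" | python3 -m json.tool

# The request itself would then be (token redacted as before):
# curl --location --request POST 'https://api-inference.huggingface.co/models/Rostlab/prot_bert' \
#   --header 'Authorization: Bearer api_org_GE...' \
#   --header 'Content-Type: application/json' \
#   --data-raw "$PAYLOAD"
```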
What could we be missing in our inference request for ProtBert? Or could this be a bug on the API side?
Thank you for any help.