But is there even a single model working here?!

And now these models aren’t working either, and they were some of the few that were still usable:

black-forest-labs/FLUX.1-dev
black-forest-labs/FLUX.1-schnell
stabilityai/stable-diffusion-3.5-large-turbo

If Hugging Face is just a platform to make money, that’s fine, but they should say it outright. No more talk of democratization, open source, or any of that, because in that sense, HF is absolute :poop:.

5 Likes

black-forest-labs/FLUX.1-dev worked when I tried it just now. However, it seems to be slow, so I think there must be some kind of problem… :cold_face:

Those were the few models that were working, but now almost all models have stopped…

1 Like

Confirmed: even “stabilityai/stable-diffusion-xl-base-1.0” has stopped working.

And I’m with javarribas. HF staff: if this platform is turning into something to make money, be honest about that. I’m willing to pay here, as long as I get a stable (!!) dev environment without breaking changes left, right, and center.

I understand that servers cost money, but right now HF is completely borked for me.

4 Likes

I’m getting the error “An error occurred while fetching the blob” while using the model facebook/bart-large-mnli with Node.js (v20.19.0):

Error: An error occurred while fetching the blob
    at innerRequest (/Users/vt/test/node_modules/@huggingface/inference/dist/index.cjs:1737:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async zeroShotClassification (/Users/vt/test/node_modules/@huggingface/inference/dist/index.cjs:2235:25)
    at async classifyText (/Users/vt/test/test.js:27:22)
    at async extractFilterCriteria (/Users/vt/test/test.js:42:24)
    at async main (/Users/vt/test/test.js:117:32)

:red_circle: This is how I call the model facebook/bart-large-mnli:

// Client setup (assuming the @huggingface/inference version installed at the time)
import { HfInference } from '@huggingface/inference';
const hf = new HfInference(process.env.HF_API_KEY);

async function classifyText(text, labels) {
    // Zero-shot classification against the hf-inference provider
    const response = await hf.zeroShotClassification({
        model: 'facebook/bart-large-mnli',
        inputs: text,
        parameters: { candidate_labels: labels },
        provider: 'hf-inference'
    });
    return response;
}

:red_circle: I have also tried cURL and got a 504 Gateway Timeout:

curl https://router.huggingface.co/hf-inference/models/facebook/bart-large-mnli \
    -X POST \
    -d '{"inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!", "parameters": {"candidate_labels": ["refund", "legal", "faq"]}}' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer HF_API_KEY'
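
For what it’s worth, the same request can be reproduced with Node’s built-in fetch (available since Node 18), which surfaces the raw HTTP status directly instead of the client’s generic blob error. This is just a debugging sketch; HF_API_KEY is read from the environment as a placeholder:

async function probeEndpoint() {
    // Call the router endpoint directly, bypassing @huggingface/inference,
    // so the real HTTP status (e.g. 504) is visible.
    const res = await fetch(
        'https://router.huggingface.co/hf-inference/models/facebook/bart-large-mnli',
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Bearer ${process.env.HF_API_KEY}`
            },
            body: JSON.stringify({
                inputs: 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!',
                parameters: { candidate_labels: ['refund', 'legal', 'faq'] }
            })
        }
    );
    console.log(res.status, res.statusText); // prints 504 Gateway Timeout here
    if (res.ok) console.log(await res.json());
}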

:green_circle: This call to sentence-transformers/all-MiniLM-L6-v2 works fine:

// This call succeeds with the same hf client, so the token and setup are fine.
async function generateEmbeddings(texts) {
    const results = await hf.featureExtraction({
        model: "sentence-transformers/all-MiniLM-L6-v2",
        inputs: texts,
        provider: 'hf-inference'
    });
    return results;
}

1 Like

Actually, it has nothing to do with blobs; you are probably using an old version of the HF Inference client: Inference API stopped working - #40 by TOOTHED. And yes, the new version will instead fail with “No Inference Provider available for model”.
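
If it helps anyone debugging, here is a minimal sketch of the upgrade path. It assumes @huggingface/inference v3+, where the class is named InferenceClient (the old HfInference name remains as an alias); the try/catch only exists to make both failure modes visible:

// npm install @huggingface/inference@latest
import { InferenceClient } from '@huggingface/inference';

const client = new InferenceClient(process.env.HF_API_KEY);

async function classifyText(text, labels) {
    try {
        return await client.zeroShotClassification({
            model: 'facebook/bart-large-mnli',
            inputs: text,
            parameters: { candidate_labels: labels },
            provider: 'hf-inference'
        });
    } catch (err) {
        // Old client versions: generic “An error occurred while fetching the blob”.
        // New client versions: “No Inference Provider available for model …”
        // when the model has no active provider.
        console.error('Inference failed:', err.message);
        throw err;
    }
}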

Most likely it is connected to this update:

So… it seems this is not just an error but a change in strategy, since the mods are replying to these issues as fixed.

2 Likes