Mage has been around for a few years now, somewhere between two and four. I discovered it through a Reddit post detailing various websites that had integrated AI image generation in some capacity. The following screenshots show Mage’s various plans and what each includes:
I currently use the Pro plan, and while it is pricey, the features and benefits you get access to make it well worth it. I personally like some of the exclusive models, such as the Mage-exclusive Illustrious/NoobAI model MagnoliaMix, which helped me produce this image, enhanced right within Mage:
Any updates on this issue?
I’m using the JS InferenceClient with the sentence-transformers/all-mpnet-base-v2 model and still getting an error. It was working fine 24 hours ago and has now stopped working.
I also tried other models, such as sentence-transformers/all-MiniLM-L6-v2 and jinaai/jina-embeddings-v3. They fail as well.
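For reference, this is the shape of the call that fails (sketched here in Python with huggingface_hub rather than the JS client; the token is a placeholder):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # placeholder token

# Same feature-extraction request the JS InferenceClient makes;
# this currently raises instead of returning the embedding.
embedding = client.feature_extraction(
    "Hello world",
    model="sentence-transformers/all-mpnet-base-v2",
)
print(embedding.shape)
```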
For those unable to use HF, go to Mistral and get a free account to get an API key. Then use this class; it will simulate the results coming back from InferenceClient when you use chat_stream().
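A minimal sketch of the idea (assuming Mistral’s OpenAI-style /v1/chat/completions endpoint with stream=True; the class name and chat_stream() signature are just illustrative):

```python
import json
import requests

class MistralChatStream:
    """Stand-in that yields text chunks like a streaming chat client."""

    API_URL = "https://api.mistral.ai/v1/chat/completions"

    def __init__(self, api_key: str, model: str = "mistral-small-latest"):
        self.api_key = api_key
        self.model = model

    def chat_stream(self, messages):
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }
        payload = {"model": self.model, "messages": messages, "stream": True}
        with requests.post(self.API_URL, headers=headers, json=payload, stream=True) as r:
            r.raise_for_status()
            # Parse the server-sent-events stream line by line.
            for line in r.iter_lines():
                if not line or not line.startswith(b"data: "):
                    continue
                data = line[len(b"data: "):]
                if data == b"[DONE]":
                    break
                chunk = json.loads(data)
                delta = chunk["choices"][0]["delta"].get("content")
                if delta:
                    yield delta
```

Usage: iterate the generator exactly as you would a streamed chat response, e.g. `for token in MistralChatStream("YOUR_KEY").chat_stream([{"role": "user", "content": "Hello"}]): print(token, end="")`.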
Current status: none of the text-to-image models work anymore, no matter whether I clone a model to my account or use any of the big players (yntec / digiplay).
I’m guessing that’s a side effect of the 404 error; I played around a bit with PHP and ran into the 404 brick wall every time I tried to use anything besides “stabilityai/stable-diffusion-xl-base-1.0” - that one even works on the testground!
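For anyone who wants to reproduce it, here is a minimal sketch of the raw call (in Python rather than PHP; the token is a placeholder):

```python
import requests

# Classic serverless Inference API endpoint; the only model that still answers for me.
API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0"
headers = {"Authorization": "Bearer hf_..."}  # placeholder token

resp = requests.post(API_URL, headers=headers, json={"inputs": "a red apple on a table"})
print(resp.status_code)  # 200 for SDXL base; every other text-to-image model returns 404

if resp.ok:
    with open("out.png", "wb") as f:
        f.write(resp.content)  # the API returns raw image bytes
```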
I slapped a little testground together here:
The sad thing: I don’t see any error. The build runs fine, no errors - but if I hit “generate”: nothing (not even an error!!). This was working on 2025-04-06; some time after that, HF had the glorious idea to “improve” something, which broke all image generation.
HF staff: if this only works on a paid account, fine! Tell me and I’m game. But more than three weeks without any real feedback is just… bad.
I added a little debug output to the build process; these are the Python modules the Space loads:
The cause of this large-scale outage may be hardware replacement. I think it happened around the time the A100s in ZeroGPU Spaces were replaced with H200s. Probably other services were affected as well?
In the long term, I think Hugging Face was unable to handle the excessive number of Inference API requests as a company. julien-c mentioned something to that effect somewhere on the Hub.
The implementation of InferenceClient itself has changed significantly…
The implementation of Gradio’s external.py has also changed to use InferenceClient in the new version.
That said, even after recovering from the large-scale failure, SmolLM2 is still not deployed at this point.
Hi, I think I have the same problem.
I’ve published a model Eddy872/zoove-t5, and everything seems correctly configured:
The repository is public
I’ve added the proper metadata at the top of the README.md
The model works perfectly with the pipeline() method in Python
However, the Inference API still returns a 404 error when calling it.
I also tried using the InferenceApi() client in Python with raw_response=True, and still got a 404.
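For completeness, this is roughly what I’m comparing (a sketch; the task name and token are assumptions on my side):

```python
from transformers import pipeline
from huggingface_hub.inference_api import InferenceApi

# Local inference works fine (task name assumed for a T5-style model):
pipe = pipeline("text2text-generation", model="Eddy872/zoove-t5")
print(pipe("translate English to German: Hello"))

# The hosted Inference API call for the same repo returns 404:
api = InferenceApi(repo_id="Eddy872/zoove-t5", token="hf_...")  # placeholder token
resp = api(inputs="translate English to German: Hello", raw_response=True)
print(resp.status_code, resp.text)
```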
Is there anything else I need to do to trigger the activation of the hosted Inference API for my model?
The same problem appeared today for me with my personal text-generation model.
Originally, I used the old version (2.8.1) of the @huggingface/inference JS package and got “Error fetching from Hugging Face API: An error occurred while fetching the blob” while trying to send text input.
Updating the package to the latest version (3.12.1) changed the error to “No Inference Provider available for model …”.
Going back to the old version and specifying the blob format (a workaround that worked earlier with Whisper audio during this month’s API hell) gave “Unexpected token ‘N’, “Not Found” is not valid JSON”.
Sending a raw HTTP request gave Error 404: Model not found.
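The raw request was roughly this (a sketch; the model id is replaced with a placeholder). Note that the plain-text “Not Found” body is exactly what the old JS client chokes on when it tries to parse it as JSON:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<user>/<model>"  # placeholder id
headers = {"Authorization": "Bearer hf_..."}  # placeholder token

resp = requests.post(API_URL, headers=headers, json={"inputs": "Hello"})
print(resp.status_code)  # 404
print(resp.text)         # "Not Found" (plain text, hence the "is not valid JSON" error above)
```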
Yeah, something has been going very wrong since April 15. Probably the only options are to wait or to switch to a different platform (which isn’t worth it for my friends-only model experiments, so I’ll just keep waiting and hoping).
Does anyone have a solution for this? I am presenting my final-year project in two days and have a model hosted on HF, and all of a sudden I can’t use it anymore. I would greatly appreciate any help, thank you!