Hmm, in that case, do you need to update LlamaIndex, or has it become unusable due to further specification changes…?
I think the model itself is deployed via an Inference Provider.
However, if you are not particularly attached to that model, it might be better to look for an alternative. More detailed information is available in the Agents course channel on the Hugging Face Discord.
Alternative API Endpoints / local models for smolagents
I am trying to learn the basics of smolagents and have run into the following big problem - please help!
I am getting a message that I have run out of the free tier for HfApiModel and need to upgrade to a paid tier.
How can I use a local model with my CodeAgent in smolagents?
I just posted in the Discord as well, but figured I’d post over here for those who are only checking one or the other.
Hi all, I have been reading a lot of questions about what to do when the examples using HfApiModel fail or you run out of credits. I was in a similar situation and initially went down the path of running locally, using the MLXModel class with Qwen2.5-Coder-32B, but that led to very long waits even on my maxed-out M4 Max. So I wanted to share another solution…