How to deploy smolagents locally

I’m using dolphin-phi 2.7B with Ollama in the terminal [on Fedora Linux]. Skipping the GUI helps RAM usage, and since I’m on a little old HP Intel laptop with no dedicated GPU or anything, I need every little bit of help I can get.
If you are in a similar boat, I recommend using Linux and going the Ollama route. You can run everything locally, and it’s not fast, but it works.
If you’re on Windows, Ollama in PowerShell worked for me too; I tried TinyLlama and Phi mini [quantized].
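
In case it’s useful, here’s a minimal sketch of how I’d point smolagents at a local Ollama server through its LiteLLMModel wrapper. The model name, port, and context size are just my assumptions from my own setup, so swap in whatever you pulled:

```python
# Minimal sketch, assuming Ollama is serving locally (`ollama serve`)
# and the model was pulled beforehand, e.g. `ollama pull dolphin-phi`.
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
    model_id="ollama_chat/dolphin-phi",  # LiteLLM's prefix for Ollama chat models
    api_base="http://localhost:11434",   # Ollama's default local endpoint
    num_ctx=4096,                        # keep the context window small to save RAM
)

agent = CodeAgent(tools=[], model=model)  # no tools, just to test the agent loop
print(agent.run("What is 7 * 6?"))
```

The same snippet should work on Windows too, just with tinyllama or whichever model you have running under Ollama there.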
I am still trying to figure out whether it’s even feasible for me to use Hugging Face Transformers directly. I think that may be pushing this machine too hard, but it might be OK with one of the SmolLM models; there’s a sketch of that route below.
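
If anyone wants to test the Transformers route, smolagents has a TransformersModel that loads the model in-process. This is only a sketch under my assumptions (a SmolLM2 instruct checkpoint, CPU only); on a machine like mine it may be too slow to be practical:

```python
# Sketch only: loading a small Hugging Face model in-process via smolagents.
# Assumes `pip install "smolagents[transformers]"` and enough RAM for a ~1.7B model.
from smolagents import CodeAgent, TransformersModel

model = TransformersModel(
    model_id="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # my guess at a CPU-friendly pick
    device_map="cpu",        # no dedicated GPU on this laptop
    max_new_tokens=256,      # cap generation so runs stay short
)

agent = CodeAgent(tools=[], model=model)
print(agent.run("Say hello in one short sentence."))
```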

You can probably find lots of guides, or ask one of the models to walk you through the process.
