Tool/function calling abilities of LLMs pulled and run locally through Ollama

I was trying to build a small AI agent that queries a database and retrieves customer details. I tried many models from the Ollama model library, but every model keeps throwing an "invalid tool" error, calling an irrelevant tool, or hallucinating and returning made-up answers. Is this a common issue when pulling and running LLMs locally with Ollama? When I use the paid Gemini API from Google Cloud, it works well (it picks the correct tools and returns the exact right answer). I need help understanding what is happening when I use a locally run LLM, and is there any way to make the local LLM behave like the Gemini API?
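For reference, here is roughly what I am doing through the Ollama Python client. The model name (`llama3.1`) and the `get_customer` tool are simplified stand-ins for my actual setup:

```python
import ollama

# Simplified stand-in for my real DB lookup.
def get_customer(customer_id: str) -> dict:
    return {"id": customer_id, "name": "Example Customer"}

# JSON-schema description of the tool, passed to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "get_customer",
        "description": "Fetch a customer's details by their ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_id": {
                    "type": "string",
                    "description": "The customer's ID.",
                },
            },
            "required": ["customer_id"],
        },
    },
}]

response = ollama.chat(
    model="llama3.1",  # a model tagged with tool support in the Ollama library
    messages=[{"role": "user", "content": "Get the details for customer 42"}],
    tools=tools,
)

# This is where my runs go wrong: the model either returns no tool_calls,
# picks the wrong tool, or invents a tool name that doesn't exist.
for call in response.message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```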

Thanks in advance


If you are using Ollama directly without an agent framework, the models that support tool calling are limited, and what you are seeing appears to be a limitation of the models themselves rather than a bug in Ollama.

As a workaround, you could use Ollama through an external agent framework, as in the sketch below.
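For example, here is a minimal sketch using LangChain's Ollama integration. The model name and the `get_customer` tool are assumptions standing in for your actual setup:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama  # pip install langchain-ollama

@tool
def get_customer(customer_id: str) -> dict:
    """Fetch a customer's details by their ID."""
    # Stand-in for your real DB query.
    return {"id": customer_id, "name": "Example Customer"}

# bind_tools() formats the tool schema for the model
# and parses its tool calls into a structured form.
llm = ChatOllama(model="llama3.1").bind_tools([get_customer])

msg = llm.invoke("Get the details for customer 42")
print(msg.tool_calls)  # parsed tool calls: name + arguments
```

A framework like this also makes it easier to validate the returned tool name against your registered tools and retry when the model picks the wrong one, which helps smaller local models behave more like the hosted Gemini API.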

