Has anyone tried the SmolAgents helium example with open-source models, specifically Llama 3 70B Instruct? I came across the "We now support VLMs in smolagents!" post, and when I tried running it, I ran into context-overflow issues. That makes sense, since local models have much smaller context windows than Claude. I'm curious how others handle context in situations like this, specifically for web search. Any tips or workarounds you've found?
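One common workaround is to trim long web-page observations before they enter the model's context, keeping only the head and tail of the text. Below is a minimal, library-agnostic sketch of that idea; the `max_chars` budget and the helper name `truncate_observation` are my own illustrative assumptions, not smolagents defaults or API.

```python
def truncate_observation(text: str, max_chars: int = 4000) -> str:
    """Keep the head and tail of a long observation, dropping the middle.

    max_chars is an assumed budget; tune it to your model's context window.
    """
    if len(text) <= max_chars:
        return text
    half = max_chars // 2
    # Mark the cut so the model knows content was elided.
    return text[:half] + "\n...[truncated]...\n" + text[-half:]


# Example: wrap whatever returns raw page text (a hypothetical fetch_page)
# so the agent only ever sees a bounded observation.
def bounded_fetch(fetch_page, url: str, max_chars: int = 4000) -> str:
    return truncate_observation(fetch_page(url), max_chars=max_chars)
```

With smaller local models you would typically apply something like this to every tool output (search results, page dumps, screenshots' OCR text) rather than just the final page.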
There was an example.
As for smolagents, there is a course and a developer community on the HF Discord, so I recommend asking there.