Hi all,
I’m currently working through the Hugging Face AI Agents course, specifically the section on building a `CodeAgent` with `smolagents`. In the lesson example, the `CodeAgent` performs a DuckDuckGo search for:

```python
agent.run("Search for the best music recommendations for a party at the Wayne's mansion.")
```

The course example shows the agent returning actual music recommendations. However, when I run the same code locally, the search results are completely different, mostly grammar questions from the English Language Stack Exchange. Here’s what I’m running:
```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=InferenceClientModel())
agent.run("Search for the best music recommendations for a party at the Wayne's mansion.")
```
And here’s a bit of what I’m getting back in the logs:
```
## Search Results
["What was best" vs "what was the best"? - English Language Stack Exchange](...)
[adverbs - About "best", "the best", and "most" - English Stack Exchange](...)
["Which one is the best" vs. "which one the best is"](...)
```
These results seem unrelated to the intent of the query. I’m wondering whether:
- the search tool’s behavior has changed since the course was recorded,
- there’s a bug or configuration issue with `DuckDuckGoSearchTool()` (a quick way to check the tool in isolation is sketched after this list), or
- I need to adjust headers, cookies, or request formatting for the tool to work as expected.
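To narrow down where the odd results come from, it can help to call the search tool directly, outside the agent loop. This is a minimal sketch, assuming a smolagents tool instance can be invoked directly with a plain query keyword argument:

```python
from smolagents import DuckDuckGoSearchTool

# Call the search tool on its own, with no agent involved,
# to see the raw results DuckDuckGo returns for this query.
search = DuckDuckGoSearchTool()
results = search(query="best music recommendations for a party at Wayne's mansion")
print(results)
```

If the standalone call returns sensible music links, the problem is more likely in how the model phrases the tool call; if it returns the same grammar questions, it points at the search backend itself.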
Any advice would be greatly appreciated! Thanks for making such a great course. I’m learning a ton and hoping to continue building with this.
Update on my issue with `CodeAgent` / search results:
Before I even posted this thread, I had already spent 10 to 15 hours trying to get the lesson working as intended. Since then, I’ve kept experimenting, but I’m still running into a lot of inconsistent and unreliable behavior that makes it hard to complete the example successfully.
Here’s what I’ve tried:
- Reworded the prompt many times (from vague to very specific), especially to guide the agent away from tool misuse or hallucinations
- Tried using other models, both local and hosted
- Created custom tools to fetch and extract content from web pages (a simplified version is sketched after this list)
- Watched the output at each step to debug parsing issues, formatting errors, and tool logic
- Upgraded my Hugging Face plan after running out of tokens, just to keep testing
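For reference, the custom fetch tool I’ve been experimenting with looks roughly like this. It’s a minimal sketch, assuming the `@tool` decorator from smolagents and the `requests` library; the function name and the truncation limit are my own choices rather than anything from the course:

```python
import requests
from smolagents import tool

@tool
def fetch_page_text(url: str) -> str:
    """Fetch a web page and return its raw text content.

    Args:
        url: The full URL of the page to fetch.
    """
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # Return only the first few thousand characters so the model's
    # context window isn't flooded with an entire page of HTML.
    return response.text[:4000]
```

The agent can then be constructed with `tools=[DuckDuckGoSearchTool(), fetch_page_text]` so it has a legitimate way to open the pages it finds.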
Also, just to clarify based on an earlier suggestion: I do have `duckduckgo_search` installed, so the agent is using the intended search backend and not falling back to mock behavior.
The most frustrating part is that I actually got this working once — the very first day I tried the lesson. But after I ran out of free tokens (possibly during the tools section), I couldn’t reproduce the success. Even after upgrading to a paid plan, the same prompt started failing in multiple ways. That’s when I started trying different models, tweaking prompts, and building fallback tools. Since then:
- The agent frequently tries to call tools that are either forbidden or hallucinated, like `visit_webpage()` or `open_link()` (one way of addressing this is sketched after this list)
- When a tool does run, the agent often fails to extract or parse the data correctly
- Switching models has not improved reliability; the failures just change shape, such as timeouts, bad extractions, or inconsistent step outputs
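One thing that might help is giving the agent an explicit page-visiting tool so it has less reason to invent one. This is a sketch, assuming the installed smolagents version ships a built-in `VisitWebpageTool` (worth double-checking the import, since the default tools may need extra dependencies); the `max_steps` cap is optional:

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel, VisitWebpageTool

# Provide a real page-visiting tool so the agent does not need to
# hallucinate helpers like visit_webpage() or open_link().
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=InferenceClientModel(),
    max_steps=6,  # limit how many reasoning/tool steps a single run can take
)
agent.run("Search for the best music recommendations for a party at the Wayne's mansion.")
```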
At this point, I’m also starting to question whether I should continue the course. I want to feel like the things I’m learning will be usable after the course ends, and right now it’s hard to tell what is actually portable versus what only works in this tightly coupled lesson environment. If the rest of the course depends on the same underlying architecture, I’m worried about running into more of the same issues.
That said, I’m planning to check out the LlamaIndex section next. I’ve heard it’s more stable, and I’ll circle back to this lesson later if things improve.
If anyone has managed to get this lesson working in a reliable way, especially with usable song or playlist output, I’d really appreciate hearing how you approached it.
Hopefully this helps save someone else some time if they run into similar issues. Thank you again to the Hugging Face team for building the course and for supporting discussion around it.