Try changing the tool description so it only takes one word as an argument:
Args:
    occasion: One word representing the type of occasion for the party menu.
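For context, this refers to the docstring of the menu-suggestion tool from the course notebook. A minimal sketch of what the updated tool could look like (the function name suggest_menu and the menu strings are illustrative assumptions, not copied verbatim from the notebook):

from smolagents import tool

@tool
def suggest_menu(occasion: str) -> str:
    """
    Suggests a menu based on the occasion.

    Args:
        occasion: One word representing the type of occasion for the party menu.
    """
    # A single-word argument keeps the agent's tool call simple, e.g. "casual" or "formal".
    if occasion == "casual":
        return "Pizza, snacks, and drinks."
    elif occasion == "formal":
        return "3-course dinner with wine and dessert."
    return "Custom menu for the butler."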
Hello!
Does anyone know, or can point me to, any documentation on the following?
I see in the examples from the HF course Lesson 1 and Lesson 2, specifically in the Alfred agent (sergiopaniego/AlfredAgent at main), that there are agent.json and prompts.yaml files, and I can't understand what they are for and how they work. I ran multiple tests, and I see the same behaviour in my agents whether or not I use those two files. Any help will be appreciated.
Also, apart from knowing what they are for and how they affect the agent, is there any template that should be followed to create those files in terms of structure or allowed keys?
Thank you.
I'll answer myself.
The prompts.yaml works as expected, but there is no difference when testing with or without it, because when nothing is passed in the prompt_templates argument the CodeAgent automatically loads the same prompt templates from its own resources in the smolagents library.
As for agent.json, it comes from the functionality that exports your agent to a JSON file, which is invoked when you push to an HF repo (push_to_hub, which creates prompts.yaml as well), so it seems to be a way to reload the agent later from a folder or from the Hub itself.
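To illustrate, here is a minimal sketch, assuming a prompts.yaml exists in the working directory; the repo id in the last line is just a placeholder:

import yaml
from smolagents import CodeAgent, HfApiModel

# Load custom prompt templates; if prompt_templates is omitted, CodeAgent
# falls back to the default templates bundled inside the smolagents package.
with open("prompts.yaml") as f:
    prompt_templates = yaml.safe_load(f)

agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    prompt_templates=prompt_templates,
)

# push_to_hub serializes the agent configuration (agent.json) together with
# prompts.yaml, so the agent can be reloaded later from the Hub or a folder.
# agent.push_to_hub("your-username/AlfredAgent")  # placeholder repo id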
In the Langfuse logging of the agent from 2.1 "Building Agents That Use Code" you can see that there is an import error for the 're' module. I have already created an issue on GitHub, but wanted to document it here too for anybody wondering.
Amazing, though, how the agent handles the error and still responds with a valid answer.
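If anyone hits the same thing before a fix lands, one workaround (assuming the failure is the agent's generated code being denied the import, rather than a deeper bug) is to authorize the module explicitly:

from smolagents import CodeAgent, HfApiModel

# Allow the agent-generated Python to `import re`; by default only a small
# whitelist of standard-library modules is authorized in the code sandbox.
agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    additional_authorized_imports=["re"],
)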
Hello everybody, happy new year. I am an impostor coder trying to learn.
In the smolagents "Building Agents That Use Code" notebook, cell 12, I'm getting an error with the following code and would appreciate help please:
from smolagents import CodeAgent, HfApiModel, Tool

image_generation_tool = Tool.from_space(
    "black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generate an image from a prompt"
)

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(tools=[image_generation_tool], model=model)

agent.run(
    "Improve this prompt, then generate an image of it.",
    additional_args={"user_prompt": "A grand superhero-themed party at Wayne Manor, with Alfred overseeing a luxurious gala"}
)
I am a bit lost: in the vision agents we are using gpt-4o... but how do I get a free API key? Or is the idea not to run the example?
The notebook link from here leads to a 404 page.
have you tried this? Chapter 2 questions - #66 by gael1130
How do I get a free GPT-4o API key to do the example?
OK, here I've made mine.
Issue with CodeAgent Initialization in Multi-Agent System (Chapter 2) of Hugging Face Agents Course
Hi, I am encountering multiple errors while working through Chapter 2 of the Agents Course on Hugging Face. Specifically, I'm facing issues in the section Splitting the Task Between Two Agents.
I am using LiteLLMModel as shown below:
from smolagents import LiteLLMModel
from dotenv import load_dotenv
load_dotenv()
messages = []
model = LiteLLMModel("gemini/gemini-2.0-flash-exp", temperature=0.2)
When I run the following code block (the tool calculate_cargo_travel_time is already defined in an earlier section):
from smolagents import (
    CodeAgent,
    DuckDuckGoSearchTool,
    VisitWebpageTool
)

web_agent = CodeAgent(
    model=model,
    tools=[
        DuckDuckGoSearchTool(),
        VisitWebpageTool(),
        calculate_cargo_travel_time,
    ],
    name="web_agent",
    description="Browses the web to find information",
    verbosity_level=0,
    max_steps=10,
)
I get the following error:
TypeError: MultiStepAgent.__init__() got an unexpected keyword argument 'name'
Additionally, when I comment out the name parameter, I get the same error for the description parameter.
Also, in another section of the code:
from smolagents.utils import encode_image_base64, make_image_url
from smolagents import OpenAIServerModel

def check_reasoning_and_plot(final_answer, agent_memory):
    # Function definition here
    output = multimodal_model(messages).content
    print("Feedback: ", output)
    if "FAIL" in output:
        raise Exception(output)
    return True
manager_agent = CodeAgent(
    model=model,
    tools=[calculate_cargo_travel_time],
    managed_agents=[web_agent],
    additional_authorized_imports=[
        "geopandas",
        "plotly",
        "shapely",
        "json",
        "pandas",
        "numpy",
    ],
    planning_interval=5,
    verbosity_level=2,
    final_answer_checks=[check_reasoning_and_plot],
    max_steps=15,
)
I receive the error:
TypeError: MultiStepAgent.__init__() got an unexpected keyword argument 'final_answer_checks'
Could anyone help me resolve these issues or provide guidance on how to fix them? Thank you in advance!
Possibly a version issue? The "unexpected keyword argument" error suggests the installed smolagents release does not yet accept those parameters, so upgrading to the version used in the course may fix it.
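A quick way to check (the exact minimum version that introduced final_answer_checks is not stated in this thread, so upgrading to the latest release is only a best guess):

from importlib.metadata import version

# Compare the installed version with the one used in the course notebook.
print(version("smolagents"))

# Then, from a shell or notebook cell, upgrade to the latest release:
# pip install -U smolagents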
I am getting this error while running the code in Colab.
Error in generating model output:
402 Client Error: Payment Required for url:
https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID:
Root=1-67d1d83b-444897b474d9f00a6aca3e66;f5647a7f-2be3-4b1c-8f87-c825587e4044)
You have exceeded your monthly included credits for Inference Providers. Subscribe to PRO to get 20x more monthly
included credits.
[Step 1: Duration 0.23 seconds
There are two possibilities for a 402 error: either there is a problem on the HF side, or it is the expected result because you have simply exceeded your included credits.
I am getting the same error as well. Is a PRO subscription required? I'd like to understand the cost implications of such a subscription.
It's because I have hit Hugging Face's free quota limit for the Qwen2.5-Coder-32B-Instruct model. Either I have to subscribe to their PRO plan for more credits, or run the model locally.
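For anyone in the same situation, one way to avoid the Inference Providers quota is to point the agent at a locally hosted model instead of HfApiModel. A minimal sketch using smolagents' TransformersModel (the smaller model id below is only an example; pick one your hardware can load):

from smolagents import CodeAgent, TransformersModel

# Runs the model locally through transformers rather than the hosted
# Inference API, so no Inference Providers credits are consumed.
model = TransformersModel(model_id="Qwen/Qwen2.5-Coder-7B-Instruct")

agent = CodeAgent(tools=[], model=model)
agent.run("Suggest a menu for a casual party.")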
I am trying to run AlfredAgent and it's giving me an error at Step 1:
Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://router.huggingface.co/hf-inference/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: Root=1-67d497ef-6276d83d2d38b86f1ab93d5a;2925a3a7-2d62-421a-b690-3f88caa0d4b5)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 250290 `inputs` tokens and 0 `max_new_tokens`
{"error":"Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 250290 `inputs` tokens and 0 `max_new_tokens`","error_type":"validation"}