In chapter 2, when running ToolCallingAgent, I encountered an error that I'm hoping to get some clarity on.
Background info:
- I’m using my local machine, not Colab.
- I’m using OpenAIServerModel instead of HfApiModel.
- The model I'm using is "qwen25-coder-32b-instruct".
Problem:
If I create an agent instance and then run it with some text, I get a bad request error:
```
Error in generating tool call with model:
Error code: 400 - {'id': '', 'object': '', 'created': 0, 'model': '', 'choices': None, 'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0, 'prompt_tokens_details': None, 'completion_tokens_details': None}, 'system_fingerprint': ''}
[Step 1: Duration 2.41 seconds]
Traceback (most recent call last):
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/agents.py", line 1007, in step
    model_message: ChatMessage = self.model(
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openinference/instrumentation/smolagents/_wrappers.py", line 287, in __call__
    output_message = wrapped(*args, **kwargs)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/models.py", line 1075, in __call__
    response = self.client.chat.completions.create(**completion_kwargs)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openai/_utils/_utils.py", line 279, in wrapper
    return func(*args, **kwargs)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py", line 914, in create
    return self._post(
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openai/_base_client.py", line 1242, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openai/_base_client.py", line 919, in request
    return self._request(
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openai/_base_client.py", line 1023, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'id': '', 'object': '', 'created': 0, 'model': '', 'choices': None, 'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0, 'prompt_tokens_details': None, 'completion_tokens_details': None}, 'system_fingerprint': ''}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/Niv/Workspace/ai-agent/main.py", line 116, in <module>
    agent.run("search for best music recommendations for a 80's themed party")
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openinference/instrumentation/smolagents/_wrappers.py", line 128, in __call__
    agent_output = wrapped(*args, **kwargs)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/agents.py", line 323, in run
    return deque(self._run(task=self.task, max_steps=max_steps, images=images), maxlen=1)[0]
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/agents.py", line 337, in _run
    raise e
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/agents.py", line 334, in _run
    final_answer = self._execute_step(task, memory_step)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/agents.py", line 358, in _execute_step
    final_answer = self.step(memory_step)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/openinference/instrumentation/smolagents/_wrappers.py", line 163, in __call__
    result = wrapped(*args, **kwargs)
  File "/opt/miniconda3/envs/ml-train/lib/python3.10/site-packages/smolagents/agents.py", line 1014, in step
    raise AgentGenerationError(f"Error in generating tool call with model:\n{e}", self.logger) from e
smolagents.utils.AgentGenerationError: Error in generating tool call with model:
Error code: 400 - {'id': '', 'object': '', 'created': 0, 'model': '', 'choices': None, 'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0, 'prompt_tokens_details': None, 'completion_tokens_details': None}, 'system_fingerprint': ''}
```
Here is the failing code:
```python
import os

from smolagents import DuckDuckGoSearchTool, OpenAIServerModel, ToolCallingAgent

MODEL_ID = "qwen25-coder-32b-instruct"

######## do it using tool calling agent ########
agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=OpenAIServerModel(
    model_id=MODEL_ID,
    api_base="https://api.lambdalabs.com/v1",
    api_key=os.getenv("INFERENCE_API_KEY")
))
agent.run("search for best music recommendations for a 80's themed party")
######### end #########
```
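My suspicion is that ToolCallingAgent sends the tool definitions to the server through the OpenAI `tools` parameter, while my manual version (below) never sends that parameter, which might explain why only the manual version works. To rule smolagents out, I put together a minimal check with the raw OpenAI client; this is an untested sketch, and the `get_weather` tool schema is made up purely to exercise the `tools` field:

```python
# Minimal sketch: call the endpoint directly with the OpenAI `tools`
# parameter to see whether the 400 comes from the server itself.
# The `get_weather` schema below is a made-up example.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.lambdalabs.com/v1",
    api_key=os.getenv("INFERENCE_API_KEY"),
)

response = client.chat.completions.create(
    model="qwen25-coder-32b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    tool_choice="auto",
)
print(response.choices[0].message)
```

If this direct call also fails with a 400, that would point at the endpoint's function-calling support rather than at ToolCallingAgent itself.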
However, if I manually construct the prompt and call the tool myself, it works. Here is the working code, which does not use ToolCallingAgent:
```python
import json
import os

from smolagents import DuckDuckGoSearchTool, OpenAIServerModel

MODEL_ID = "qwen25-coder-32b-instruct"

model = OpenAIServerModel(
    model_id=MODEL_ID,
    api_base="https://api.lambdalabs.com/v1",
    api_key=os.getenv("INFERENCE_API_KEY")
)

messages = [
    {
        "role": "system",
        "content": (
            "Please output a valid JSON object for a tool call with the following schema:\n"
            "{\"tool\": <tool name>, \"input\": {\"query\": <search query>}}\n\n"
            "For example, if calling DuckDuckGoSearchTool, output:\n"
            "{\"tool\": \"DuckDuckGoSearchTool\", \"input\": {\"query\": \"best music recommendations for a party\"}}\n\n"
            "Now, please output the JSON for the following query:\n"
            "\"best music recommendations for a party at Wayne Mansion.\""
        )
    }
]

response = model(messages)
json_str = response.content

try:
    tool_call = json.loads(json_str)
except json.JSONDecodeError as e:
    print("Failed to parse JSON output:", e)
    exit(1)

tool_name = tool_call.get("tool")
tool_input = tool_call.get("input", {})
print(f"Parsed tool call: tool: {tool_name}, input: {tool_input}")

if tool_name == "DuckDuckGoSearchTool":
    search_tool = DuckDuckGoSearchTool()
    query = tool_input.get("query")
    if query:
        search_result = search_tool(query)
        print("Search result from DuckDuckGoSearchTool:")
        print(search_result)
    else:
        print("No query provided in the tool call input.")
else:
    print(f"Unknown tool: {tool_name}")
```
What am I doing wrong when using ToolCallingAgent? Or is this a known issue with it?