First I thought we were using Llama-3.3-70B-Instruct, since I had to request access to that model. Then I saw that Hugging Face uses a Qwen model by default.
I think it depends on the version of smolagents you are using, but if you do not specify anything, the model set in the library's source code is used by default.
If you specify a model, that model is used instead.
````python
    ... )
    >>> messages = [{"role": "user", "content": "Explain quantum mechanics in simple terms."}]
    >>> response = engine(messages, stop_sequences=["END"])
    >>> print(response)
    "Quantum mechanics is the branch of physics that studies..."
    ```
    """

    def __init__(
        self,
        model_id: str = "Qwen/Qwen2.5-Coder-32B-Instruct",
        provider: str | None = None,
        token: str | None = None,
        timeout: int = 120,
        client_kwargs: dict[str, Any] | None = None,
        custom_role_conversions: dict[str, str] | None = None,
        api_key: str | None = None,
        bill_to: str | None = None,
        base_url: str | None = None,
        **kwargs,
    ):
````
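For what it's worth, the behavior described above boils down to an ordinary Python keyword default. Here is a minimal sketch of that mechanism (`SketchModel` is a made-up stand-in for illustration, not the real smolagents class):

```python
# Hypothetical stand-in mimicking how InferenceClientModel's model_id
# default works: an unspecified model falls back to the library default.
DEFAULT_MODEL_ID = "Qwen/Qwen2.5-Coder-32B-Instruct"

class SketchModel:
    """Illustrative stand-in, not the actual smolagents class."""
    def __init__(self, model_id: str = DEFAULT_MODEL_ID):
        self.model_id = model_id

# No model specified -> the library default (Qwen) is used.
print(SketchModel().model_id)

# Model specified -> the specified model is used instead.
print(SketchModel("meta-llama/Llama-3.3-70B-Instruct").model_id)
```

So whether you get Qwen or Llama just depends on whether you pass `model_id` when constructing the model.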
Yeah, but why did we have to request access to Llama-3.3-70B-Instruct if Qwen was already the default?
Dudes, it's gonna get interesting.
I raised an issue for now.
Opened 11:14 AM, 15 Jun 25 UTC · label: hands-on-bug
**Describe the bug**
https://github.com/huggingface/agents-course/issues/510
Although the above change means that approval for the Llama model is no longer required to use the notebook, there seems to be some confusion because the documentation has not been updated.
https://github.com/huggingface/agents-course/blob/main/units/en/unit1/what-are-llms.mdx
> You also need to request access to the Meta Llama models.
https://discuss.huggingface.co/t/which-ai-model-we-will-be-using-in-this-course/159351
https://github.com/huggingface/agents-course/issues/536
Llama may still be necessary in other parts of the course, so I have raised it as an issue for now.
Cool, I thought I was asking a very dumb question xD