I'm developing an agentic AI program, and I'm using ChatHuggingFace in LangChain.
Here is my code:
from langchain_community.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="meta-llama/Llama-3.2-3B-Instruct",
    huggingfacehub_api_token="hf_xxxxxxxxx",
    model_kwargs={"temperature": 0.1, "max_new_tokens": 3000},
)

from langchain_community.chat_models import ChatHuggingFace

llm = ChatHuggingFace(llm=llm)
Then this error occurred:

Cannot access gated repo for url https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/resolve/main/config.json.
Access to model meta-llama/Llama-3.2-3B-Instruct is restricted and you are not in the authorized list. Visit meta-llama/Llama-3.2-3B-Instruct · Hugging Face to ask for access.

How can I handle this?
Some models are gated models, and you need to request permission for each of them individually. Once you have permission, you can use them normally by passing your HF read token.
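As a quick sanity check before wiring the model into LangChain, you can probe whether your token has been granted access. This is a minimal sketch using only the standard library; the config.json URL pattern matches the one in the error message above, and the helper names are just illustrative:

```python
import urllib.request
import urllib.error

def gated_config_url(repo_id: str) -> str:
    # Same config.json URL that appears in the "Cannot access gated repo" error.
    return f"https://huggingface.co/{repo_id}/resolve/main/config.json"

def has_access(repo_id: str, hf_token: str) -> bool:
    # HEAD request with the read token: 200 means access was granted,
    # 401/403 means the gated repo is still restricted for this token.
    req = urllib.request.Request(
        gated_config_url(repo_id),
        method="HEAD",
        headers={"Authorization": f"Bearer {hf_token}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Example (requires a real token):
# has_access("meta-llama/Llama-3.2-3B-Instruct", "hf_xxxxxxxxx")
```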
Thank you.
And I've encountered another issue. I load this model:
from langchain_community.llms import HuggingFaceHub

hf = HuggingFaceHub(
    repo_id="meta-llama/Llama-3.2-1B-Instruct",
    huggingfacehub_api_token="hf_xxxxxxxxx",  # token redacted
    model_kwargs={"temperature": 0.1, "max_new_tokens": 3000},
)
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
import requests
import json

tv_search = TavilySearchResults(max_results=3, search_depth='advanced',
                                max_tokens=10000)

@tool
def search_web(query: str) -> list:
    """Search the web for a query."""
    tavily_tool = TavilySearchResults(max_results=2)
    results = tavily_tool.invoke(query)
    return results

from langchain_community.chat_models import ChatHuggingFace

llm = ChatHuggingFace(llm=hf, model_id='meta-llama/Llama-3.2-1B-Instruct')
tools = [search_web]
llm.bind_tools(tools)
But bind_tools raises a NotImplementedError.
Try
pip install -U langchain langchain_huggingface
I tried it.
My LangChain version is 0.3.11.
But it doesn't work.
Perhaps it's an unresolved issue?
You may also be able to use Hugging Face models via Ollama instead of ChatHuggingFace.
https://api.python.langchain.com/en/latest/llms/langchain_ollama.llms.OllamaLLM.html
So is there any other method to use Llama 3.2 with tools in LangChain?
Please give me an example of using Llama 3.2 with langchain_ollama.
But Ollama is a local LLM, and I don't want to download the model. Is there any online method?
I see. In that case, there's nothing for it but to get the library working somehow, but until we know the cause…
Edit:
Wow…
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_hub.HuggingFaceHub.html
Deprecated since version 0.0.21: Use langchain_huggingface.HuggingFaceEndpoint instead.
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
Thank you for your great contribution again!
You are always a great help to me.
I look forward to more help from you.
@spencer1129 kind reminder to rotate your HF Token since it is public. Thanks!
system closed this topic on December 14, 2024, 3:40am.
This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.