401 Client Error

Hello, I am very new to LangChain and I am facing an error. Here is my entire code and the error.
Code:
```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from dotenv import load_dotenv

load_dotenv()

llm = HuggingFaceEndpoint(
    repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    task="text-generation",
)

model = ChatHuggingFace(llm=llm)

result = model.invoke("What is the Capital of India?")

print(result.content)
```

Error:

```
Traceback (most recent call last):
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
    response.raise_for_status()
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://router.huggingface.co/hf-inference/models/TinyLlama/TinyLlama-1.1B-Chat-v1.0/v1/chat/completions

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\HP\Desktop\LangChain\ChatModels\chatmodel_hf.py", line 13, in
    result = model.invoke("What is the Capital of India?")
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 284, in invoke
    self.generate_prompt(
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 860, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 690, in generate
    self._generate_with_cache(
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 925, in _generate_with_cache
    result = self._generate(
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\langchain_huggingface\chat_models\huggingface.py", line 370, in _generate
    answer = self.llm.client.chat_completion(messages=message_dicts, **kwargs)
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\huggingface_hub\inference\_client.py", line 956, in chat_completion
    data = self._inner_post(request_parameters, stream=stream)
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\huggingface_hub\inference\_client.py", line 321, in _inner_post
    hf_raise_for_status(response)
  File "C:\Users\HP\Desktop\LangChain\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 481, in hf_raise_for_status
    raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 401 Client Error: Unauthorized for url: https://router.huggingface.co/hf-inference/models/TinyLlama/TinyLlama-1.1B-Chat-v1.0/v1/chat/completions (Request ID: Root=1-67c0aa28-0237850d7483818d72a13bd6;7c29eb7d-2c6e-4624-ad02-f44ad003c9da)

Invalid username or password.
```


Since this uses an online inference service, you need to set up a Hugging Face access token in advance. If you want to run the model locally, please use the following method instead. Also, if you are not sure, Ollama is easier to use than LangChain.
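Concretely, the token usually just goes into the `.env` file that `load_dotenv()` reads. Below is a minimal sketch; note the assumptions: `HUGGINGFACEHUB_API_TOKEN` is the variable name the LangChain integration conventionally looks for (`huggingface_hub` also accepts `HF_TOKEN`), the token value shown is a fake placeholder, and the tiny loader function is a stdlib stand-in for python-dotenv's `load_dotenv()` so the example runs on its own.

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv():
    parse KEY=VALUE lines and put them into os.environ."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env present; rely on variables already in the environment

# In real use you create .env by hand with a token from your Hugging Face
# account settings; the value below is a fake placeholder for this demo.
with open(".env", "w") as f:
    f.write("HUGGINGFACEHUB_API_TOKEN=hf_dummy_placeholder\n")

load_env_file()
token = os.environ.get("HUGGINGFACEHUB_API_TOKEN") or os.environ.get("HF_TOKEN")
print("token found:", token is not None)
```

If the 401 persists even with `.env` in place, it may be worth passing the token to `HuggingFaceEndpoint` explicitly (it accepts a `huggingfacehub_api_token` argument) to rule out the environment not being loaded at all.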

It got resolved yesterday. Sometimes I get the error and sometimes it doesn't show up.
