I'm getting an error message when working with my User Access Tokens

I'm working on a ChatPDF app and I got this error while using my HuggingFaceHub API key.

llm = HuggingFaceHub(repo_id="OpenAssistant/oasst-sft-1-pythia-12b")
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=knowledge_base.as_retriever())
response = chain.run(question)

st.success("Completed question.")
st.write("Answer: ", response)
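For context, LangChain's HuggingFaceHub wrapper looks up the token in the HUGGINGFACEHUB_API_TOKEN environment variable, so it has to be set before the LLM is constructed. A minimal sketch (the token value below is a placeholder, not a real token):

```python
import os

# LangChain's HuggingFaceHub reads HUGGINGFACEHUB_API_TOKEN from the
# environment at construction time, so set it first. The value here is
# a placeholder -- paste your own token from
# https://huggingface.co/settings/tokens
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

If the variable is set after `HuggingFaceHub(...)` has already been created, the wrapper may still be holding no token (or an old one).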

File "C:\Users\HP\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\llms\huggingface_hub.py", line 112, in _call
raise ValueError(f"Error raised by inference API: {response['error']}")

ValueError: Error raised by inference API: Authorization header is correct, but the token seems invalid


I just got the same error message. Did you ever resolve this?

Yes.
I created a .env file and placed the token in it…
Then I called it this way: os.environ["HUGGINGFACEHUB_API_TOKEN"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
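If you want to keep the token in a .env file rather than in source, here is a minimal stdlib-only sketch of loading it (no python-dotenv dependency; the file path and variable name are assumptions to adapt to your project):

```python
import os
from pathlib import Path

# Minimal sketch: load KEY=VALUE lines from a local .env file into
# os.environ, skipping blanks and comments. Existing environment
# variables are not overwritten.
def load_env_file(path: str = ".env") -> None:
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# A .env file containing the single line
#   HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxx
# then makes the token visible to LangChain without hard-coding it.
```

Call `load_env_file()` once at startup, before constructing the LLM.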

Hey there! I'm using the Hugging Face API and encountering the same problem. I'm using React — how can I create a .env file for that?

Hello, I am following an introductory video from YouTube titled:

Hugging Face + Langchain in 5 mins | Access 200k+ FREE AI models for your AI apps

Everything seems to work well, but I encountered this error message related to the bearer token: {"error":"Authorization header is correct, but the token seems invalid"}
I already tried creating another token, but the error continues.
If anybody could find a solution I would be very grateful.

I'm currently having a similar issue on my end.

from huggingface_hub import login

login(config['huggingface_api'])

import getpass
from transformers import HfAgent

agent_star = HfAgent(
    "https://api-inference.huggingface.co/models/bigcode/starcoder"
)
text = "this is new building with 14 storeys painted green."
storey_1 = agent_star.run("What is the number of storeys?", text=text)

Yet I'm getting the following error message:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\Users\tolu.olusooto\Documents\Nakina\.venv\lib\site-packages\transformers\tools\agents.py", line 341, in run
    result = self.generate_one(prompt, stop=["Task:"])
  File "c:\Users\tolu.olusooto\Documents\Nakina\.venv\lib\site-packages\transformers\tools\agents.py", line 649, in generate_one
    raise ValueError(f"Error {response.status_code}: {response.json()}")
ValueError: Error 400: {'error': 'Authorization header is correct, but the token seems invalid'}

Any advice on this would be greatly appreciated.
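Before blaming the API, it's worth a quick local sanity check: this error is often caused by stray whitespace, quotation marks copied along with the token, or an old token. A small helper sketch (the length threshold is a rough assumption, and older token formats may not start with the prefix, so treat this as a hint rather than a hard rule):

```python
# Quick local sanity check on a pasted token. Current Hugging Face
# user access tokens start with "hf_"; surrounding whitespace or
# quotes copied from a config file will make the API reject it.
def looks_like_hf_token(token: str) -> bool:
    token = token.strip().strip('"').strip("'")
    return token.startswith("hf_") and len(token) > 10 and " " not in token
```

If this returns False for the value you are passing, re-copy the token from the settings page.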

I was having the same issue, following a LangChain course, and it helped to change os.environ["HUGGINGFACEHUB_API_TOKEN"] to os.environ["API_TOKEN"].

I’ll try it out and report back. Thanks for the suggestion.

I too faced the same issue. Changing the permissions of the token to write solved the issue. Please try and confirm.


Changing permission to “write” worked for me.


How can I change the permission to write?

Hey, did you figure out how to change the permission?

Hmm I changed mine to write, yet I am still running into the issue.

Profile > Settings > Access Tokens
Create a new Access Token with WRITE permission and use that new token. Changing the permission on an already existing token doesn't seem to work — just create a new token.
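After creating the new token, you can verify it independently of LangChain or transformers by calling the Hub's whoami endpoint directly — a 401 response here reproduces the "token seems invalid" error and confirms the token itself is the problem. A stdlib-only sketch:

```python
import json
import urllib.error
import urllib.request

# Sketch: check a token directly against the Hub's whoami endpoint.
# Returns True if the Hub accepts the token, False on an HTTP error
# (e.g. 401 for an invalid token).
def check_token(token: str) -> bool:
    req = urllib.request.Request(
        "https://huggingface.co/api/whoami-v2",
        headers={"Authorization": f"Bearer {token.strip()}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return "name" in json.load(resp)
    except urllib.error.HTTPError:
        return False
```

If `check_token` returns True but the LangChain call still fails, the problem is in how the token reaches the library, not the token itself.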

Hey, that didn't work for me either. Is anyone else still having the same issue?

If you run chain.run("msg") and it shows an error, then:

  1. Go to your Hugging Face account → Settings → Access Tokens.
  2. Edit the Access Token's permissions.
  3. Under Repositories, check "Write access to contents/settings of all repos under your personal namespace".
  4. Under Inference, check "Make calls to the serverless Inference API".
  5. Save and retry running your code.
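Once the Inference permission is enabled, a direct call to the serverless Inference API confirms the token works end to end, outside any wrapper library. A minimal sketch with the stdlib only — the model name and prompt below are just examples:

```python
import json
import urllib.request

# Direct call to the serverless Inference API with a Bearer token.
# gpt2 is only an example model; any hosted model URL works the same way.
API_URL = "https://api-inference.huggingface.co/models/gpt2"

def query(token: str, prompt: str):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"inputs": prompt}).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If this call succeeds but LangChain still errors, compare the token string you pass here with whatever ends up in HUGGINGFACEHUB_API_TOKEN.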