Invalid credentials in Authorization header... Please help

Tried literally everything. Tried changing API keys (read, write, fine-grained, all with permissions). Tried changing models (TinyLlama, Meta Llama 2, DeepSeek)… Looked for other community posts… TRIED EVERYTHING… Still ending with "Invalid credentials in Authorization header".

This is my original code:
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from dotenv import load_dotenv
import os

load_dotenv()

llm = HuggingFaceEndpoint(
    repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    task="text-generation",
    huggingfacehub_api_token=os.getenv("HUGGINGFACEHUB_API_TOKEN"),
)
print("Loaded token:", os.getenv("HUGGINGFACEHUB_API_TOKEN"))

model = ChatHuggingFace(llm=llm)
model.invoke("What is the capital of India?")  # example usage

All required dependencies are installed.
Please help.


Hmm… I think this is how you can check the validity of the token itself.

from dotenv import load_dotenv
import os

load_dotenv()
print("Loaded token:", os.getenv("HUGGINGFACEHUB_API_TOKEN"))

from huggingface_hub import HfApi
api = HfApi()
print(api.whoami(os.getenv("HUGGINGFACEHUB_API_TOKEN")))

If the token itself is fine, then it's probably a version issue or a bug in LangChain or one of the other libraries.
If the token isn't working, then it's probably something with your network environment or Hugging Face's server.
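Before blaming the network, it may also be worth sanity-checking the raw token string for the usual .env pitfalls (quotes leaking into the value, stray whitespace, or pasting something that isn't actually a user access token). A minimal stdlib-only sketch; the `check_token` helper is my own name, and the `hf_` prefix check assumes a standard user access token:

```python
def check_token(token):
    """Flag common .env pitfalls in a Hugging Face token string.

    Hypothetical helper for illustration; the 'hf_' prefix check
    assumes a standard user access token.
    """
    if token is None:
        return ["token is None: .env not found/loaded or variable name mismatch"]
    problems = []
    if token != token.strip():
        problems.append("leading/trailing whitespace in the value")
    cleaned = token.strip()
    if cleaned[:1] in ('"', "'") or cleaned[-1:] in ('"', "'"):
        problems.append("quote characters leaked into the value")
        cleaned = cleaned.strip("\"'")
    if not cleaned.startswith("hf_"):
        problems.append("does not start with 'hf_': probably not a user access token")
    return problems

print(check_token("hf_abc123"))  # → []
```

An empty list means the string at least looks like a token; any reported problem is a likely cause of the "Invalid credentials" error before the request even reaches authentication.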

import logging
import os

import torch
from huggingface_hub.utils._runtime import dump_environment_info
#####################################################

API_TOKEN = os.environ['HF_TOKEN']
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

os.environ.setdefault('GRADIO_ANALYTICS_ENABLED', 'False')
os.environ.setdefault('HF_HUB_DISABLE_TELEMETRY', '1')

dump_environment_info()
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

This is my simple "go-to" snippet for when a Space is acting up. It should provide all the context you need to make a somewhat better-informed next step… you can either come back at a later time and hope it fixes itself, or continue debugging with more awareness of your immediate environment.


Great! Personally, I also recommend the HF_DEBUG environment variable.

import logging
import os

import torch
from huggingface_hub.utils._runtime import dump_environment_info
#####################################################

API_TOKEN = os.environ['HF_TOKEN']
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

os.environ.setdefault('GRADIO_ANALYTICS_ENABLED', 'False')
os.environ.setdefault('HF_HUB_DISABLE_TELEMETRY', '1')
os.environ.setdefault('HF_DEBUG', '1')

dump_environment_info()
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
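One subtlety worth knowing about this pattern: `os.environ.setdefault` only applies the value when the variable is not already set, so an `HF_DEBUG` exported in your shell or in the Space settings wins over the in-script default. A quick stdlib illustration:

```python
import os

os.environ.pop("HF_DEBUG", None)        # simulate a clean environment
os.environ.setdefault("HF_DEBUG", "1")
print(os.environ["HF_DEBUG"])           # → 1 (default applied)

os.environ["HF_DEBUG"] = "0"            # simulate a value exported by the shell
os.environ.setdefault("HF_DEBUG", "1")  # does not override the existing value
print(os.environ["HF_DEBUG"])           # → 0
```

That is usually what you want for debug flags (the user's environment stays in control), but it also means the snippet silently does nothing if the variable is already set elsewhere.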