TRANSFORMERS_CACHE issue inside Jupyter notebook

Hi, I'm excited to start trying out Hugging Face Transformers. I followed the installation instructions and installed it in a virtual environment under a directory A. The system-wide default cache directory is in my home directory, B. There is a strict disk-space limit on B, so I changed TRANSFORMERS_CACHE to a location under A where there is no disk limit.

This works fine from the command line. I then installed an ipykernel for my virtual env into Jupyter notebook. Here things get weird. I checked and made sure that TRANSFORMERS_CACHE is indeed still pointing to A. However, every time I start to use a pretrained model, it still looks for the cache in B, not A.
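For reference, this is roughly how I checked inside a notebook cell (just a sketch; the import path for the TRANSFORMERS_CACHE constant differs between transformers versions, with older releases exposing it via transformers.file_utils instead of transformers.utils):

import os

# What the notebook kernel's environment actually says
print("TRANSFORMERS_CACHE env var:", os.environ.get("TRANSFORMERS_CACHE"))

# The cache path transformers resolved when it was imported;
# recent versions expose this constant via transformers.utils
from transformers.utils import TRANSFORMERS_CACHE
print("resolved cache dir:", TRANSFORMERS_CACHE)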

Does anyone know how to solve this basic issue?

Same issue here. I managed to solve it by making sure that I load the env variables before any import of transformers, for example using
from dotenv import load_dotenv
load_dotenv()

and a .env file to configure the HUGGINGFACE_HUB_CACHE variable. That's because Jupyter may not load the same environment variables that are declared in a …rc file.
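A minimal sketch of the full ordering, assuming python-dotenv is installed and the .env file sits next to the notebook (the cache path below is just a placeholder):

# .env file, in the same directory as the notebook:
# HUGGINGFACE_HUB_CACHE=/path/under/A/hf_cache
# TRANSFORMERS_CACHE=/path/under/A/hf_cache

from dotenv import load_dotenv

# Must run before transformers is imported, because the cache
# location is read from the environment at import time
load_dotenv()

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # cached under A now

After editing the .env file, restart the kernel so that nothing has already been imported with the old value.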