I am running several models in production using the Hugging Face transformers library to perform some tasks continuously. Unfortunately, the "continuously" part is very much lacking.
This is because, quite often (every couple of hours), the following error pops up:
requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/.../resolve/main/tokenizer.json or config.json
I have even tried to avoid this error by running transformers in offline mode using:
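For reference, this is roughly how I enable offline mode (a sketch; it assumes the model has already been downloaded once so it exists in the local cache):

```python
import os

# Setting these BEFORE importing transformers tells both transformers and
# huggingface_hub to resolve files from the local cache only and never
# hit the network. This only works if the model was downloaded at least once.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"
```

On top of the environment variables, I also pass `local_files_only=True` to `from_pretrained()` as a per-call guard against network access.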
Unfortunately, the system still seems to request the model from the web, even though it is cached and offline mode is enabled. Any ideas how to avoid this problem (other than saving the model to disk and loading it from a local path)?