502 server error when running model

Hi everyone,

I am running several models in production using the Hugging Face transformers library to perform some tasks continuously. Unfortunately, the “continuously” part is very much lacking.

This is because, very often (every couple of hours), the following error pops up:

requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/.../resolve/main/tokenizer.json or config.json

I have even tried to avoid this error by running transformers in offline mode.

Unfortunately, the system still seems to request the model from the web, even though it is cached and offline mode is enabled. Any ideas how to avoid this problem? (except for saving the model offline)
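For reference, here is a minimal sketch of how I understand offline loading is supposed to work, using the `TRANSFORMERS_OFFLINE` / `HF_HUB_OFFLINE` environment variables and the `local_files_only` flag (the model name here is just an example):

```python
import os

# Setting these before importing transformers tells the library and
# huggingface_hub to use only the local cache and never hit the network.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

try:
    from transformers import AutoTokenizer

    # local_files_only=True fails fast with a clear error if the files are
    # not cached, instead of attempting (and possibly retrying) a request.
    tokenizer = AutoTokenizer.from_pretrained(
        "bert-base-uncased", local_files_only=True
    )
except (ImportError, OSError):
    # transformers is not installed, or the model was never cached:
    # download it once while online before relying on offline mode.
    tokenizer = None
```

In my experience this should stop all Hub requests, yet the 502 still appears, which is what puzzles me.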


A colleague of mine and I have recently experienced a similar issue. Even if the models are cached locally, a momentary internet disconnection results in an error if it coincides with a model/tokenizer load in a series of training scripts. My thoughts on potential reasons go no further than speculation, so any suggestions on what the issue might be and how to solve it are very welcome.
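One mitigation we have considered (just a sketch, not a fix for the underlying cache behaviour) is wrapping each load in a retry loop with backoff, so a momentary disconnection does not kill the whole series of scripts. The helper and the flaky loader below are hypothetical names for illustration:

```python
import time

def load_with_retries(load_fn, attempts=3, backoff_s=5.0):
    """Call load_fn(), retrying on failure with linear backoff.

    load_fn is any zero-argument callable, e.g.
    lambda: AutoTokenizer.from_pretrained("bert-base-uncased").
    """
    for attempt in range(1, attempts + 1):
        try:
            return load_fn()
        except Exception:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(backoff_s * attempt)

# Demonstration with a loader that fails twice, then succeeds:
calls = {"n": 0}
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "loaded"

result = load_with_retries(flaky_load, attempts=3, backoff_s=0.0)
```

This papers over brief outages, but it obviously does not explain why the cached model is being requested from the web at all.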

I’m facing the same problem. Reported here: HTTP 502 Bad Gateway for url