huggingface_hub.errors.HfHubHTTPError: 494 Client Error: for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json

This randomly started popping up 20 minutes ago. At the same time, my access to the model was somehow revoked, so I had to apply again. I got access back, but I still can't run it: HTTP 494.

What's up with that?

B.

INFO 11-29 09:30:59 __init__.py:42] No plugins found.
ERROR 11-29 09:30:59 engine.py:366] 494 Client Error:  for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json
ERROR 11-29 09:30:59 engine.py:366] Traceback (most recent call last):
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
ERROR 11-29 09:30:59 engine.py:366]     response.raise_for_status()
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/requests/models.py", line 1024, in raise_for_status
ERROR 11-29 09:30:59 engine.py:366]     raise HTTPError(http_error_msg, response=self)
ERROR 11-29 09:30:59 engine.py:366] requests.exceptions.HTTPError: 494 Client Error:  for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json
ERROR 11-29 09:30:59 engine.py:366] 
ERROR 11-29 09:30:59 engine.py:366] The above exception was the direct cause of the following exception:
ERROR 11-29 09:30:59 engine.py:366] 
ERROR 11-29 09:30:59 engine.py:366] Traceback (most recent call last):
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 11-29 09:30:59 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 11-29 09:30:59 engine.py:366]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
ERROR 11-29 09:30:59 engine.py:366]     engine_config = engine_args.create_engine_config(usage_context)
ERROR 11-29 09:30:59 engine.py:366]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 993, in create_engine_config
ERROR 11-29 09:30:59 engine.py:366]     model_config = self.create_model_config()
ERROR 11-29 09:30:59 engine.py:366]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 921, in create_model_config
ERROR 11-29 09:30:59 engine.py:366]     return ModelConfig(
ERROR 11-29 09:30:59 engine.py:366]            ^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 214, in __init__
ERROR 11-29 09:30:59 engine.py:366]     hf_config = get_config(self.model, trust_remote_code, revision,
ERROR 11-29 09:30:59 engine.py:366]                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 173, in get_config
ERROR 11-29 09:30:59 engine.py:366]     if is_gguf or file_or_path_exists(
ERROR 11-29 09:30:59 engine.py:366]                   ^^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 94, in file_or_path_exists
ERROR 11-29 09:30:59 engine.py:366]     return file_exists(model, config_name, revision=revision, token=token)
ERROR 11-29 09:30:59 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
ERROR 11-29 09:30:59 engine.py:366]     return fn(*args, **kwargs)
ERROR 11-29 09:30:59 engine.py:366]            ^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/hf_api.py", line 2907, in file_exists
ERROR 11-29 09:30:59 engine.py:366]     get_hf_file_metadata(url, token=token)
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
ERROR 11-29 09:30:59 engine.py:366]     return fn(*args, **kwargs)
ERROR 11-29 09:30:59 engine.py:366]            ^^^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py", line 1296, in get_hf_file_metadata
ERROR 11-29 09:30:59 engine.py:366]     r = _request_wrapper(
ERROR 11-29 09:30:59 engine.py:366]         ^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py", line 277, in _request_wrapper
ERROR 11-29 09:30:59 engine.py:366]     response = _request_wrapper(
ERROR 11-29 09:30:59 engine.py:366]                ^^^^^^^^^^^^^^^^^
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py", line 301, in _request_wrapper
ERROR 11-29 09:30:59 engine.py:366]     hf_raise_for_status(response)
ERROR 11-29 09:30:59 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
ERROR 11-29 09:30:59 engine.py:366]     raise _format(HfHubHTTPError, str(e), response) from e
ERROR 11-29 09:30:59 engine.py:366] huggingface_hub.errors.HfHubHTTPError: 494 Client Error:  for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.12/dist-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 494 Client Error:  for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
    engine_config = engine_args.create_engine_config(usage_context)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 993, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 921, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 214, in __init__
    hf_config = get_config(self.model, trust_remote_code, revision,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 173, in get_config
    if is_gguf or file_or_path_exists(
                  ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 94, in file_or_path_exists
    return file_exists(model, config_name, revision=revision, token=token)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/hf_api.py", line 2907, in file_exists
    get_hf_file_metadata(url, token=token)
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
        ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py", line 277, in _request_wrapper
    response = _request_wrapper(
               ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py", line 301, in _request_wrapper
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
    raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 494 Client Error:  for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json

Error 494 is a very rare HTTP status code. It sometimes seems to occur when there is a problem with SSL-related settings, but as far as I can tell from my searches, you are the first person to encounter it in relation to Hugging Face. :sweat_smile:
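One small clue in the log itself: the doubled space in "494 Client Error:  for url: …" comes from `requests`, which formats 4xx responses as `"<status> Client Error: <reason> for url: <url>"`. A server that answers 494 with an *empty* reason phrase produces exactly that gap. 494 isn't defined in the HTTP spec; it is best known as nginx's non-standard "Request Header Or Cookie Too Large" code, which would point at a proxy/CDN in front of the Hub rather than the Hub application itself. A minimal offline sketch reproducing the message shape (the `Response` is constructed by hand, no network involved):

```python
import requests

# Hand-built Response mimicking a server that replies 494
# with an empty reason phrase -- no request is actually sent.
resp = requests.models.Response()
resp.status_code = 494  # non-standard; nginx uses it for oversized request headers
resp.reason = ""        # empty reason phrase -> the doubled space in the log
resp.url = "https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json"

try:
    resp.raise_for_status()
except requests.HTTPError as e:
    print(e)
# -> 494 Client Error:  for url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/resolve/main/config.json
```

If the 494 really is nginx complaining about oversized headers, a stale or duplicated cookie/token in the environment would be a plausible trigger, and it would also explain why it appeared suddenly and independently of your gated-access status.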