I'm using the Hugging Face CLI tool to download large models, but I frequently run into download interruptions. Recently, when downloading the deepseek-v3 and Qwen models concurrently, Qwen completed successfully, but deepseek-v3 failed with this error:
Traceback (most recent call last):
  File "/usr/local/bin/huggingface-cli", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/commands/huggingface_cli.py", line 57, in main
    service.run()
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/commands/download.py", line 153, in run
    print(self._download())  # Print path to downloaded files
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/commands/download.py", line 187, in _download
    return snapshot_download(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/_snapshot_download.py", line 296, in snapshot_download
    thread_map(
  File "/usr/local/lib/python3.10/dist-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
  File "/usr/local/lib/python3.10/dist-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/_snapshot_download.py", line 270, in _inner_hf_hub_download
    return hf_hub_download(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 842, in hf_hub_download
    return _hf_hub_download_to_local_dir(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 1138, in _hf_hub_download_to_local_dir
    _download_to_tmp_and_move(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 1547, in _download_to_tmp_and_move
    http_get(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 483, in http_get
    raise EnvironmentError(
OSError: Consistency check failed: file should be of size 4302350190 but has size 2053432448 (model-00009-of-000163.safetensors).
This is usually due to network issues while downloading the file. Please retry with `force_download=True`.
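For context, the two downloads were started in separate terminals with commands roughly like the ones below. The repo IDs and --local-dir paths are from memory, so treat them as illustrative rather than exact:

# terminal 1 (this is the download that hit the consistency error)
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./deepseek-v3

# terminal 2 (this one finished without problems)
huggingface-cli download Qwen/Qwen2.5-72B-Instruct --local-dir ./qwen2.5-72b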
Has anyone experienced similar issues? I've already used the force_download=True parameter in my download settings. How have you resolved this?
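To be concrete, the retry looked roughly like this; as far as I understand, the --force-download flag is the CLI equivalent of the force_download=True parameter mentioned in the error message (repo ID and path are again approximate):

# re-run the failed download, discarding any partially downloaded files
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./deepseek-v3 --force-download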
Can you recommend more reliable download tools or methods? For models that are tens of GB in size, what's your approach to ensuring stable and complete downloads?
Thank you for any suggestions and help!