qwen_image_edit_fp8_e4m3fn.IRFpfsqH.safetensors.part
This takes hours. 1.9 MB/s for a 19 GB file is, in my opinion, very frustrating.
Why is this so slow, and is there a solution for this?
Downloading via the HF CLI (with Xet enabled) is usually faster than using a browser, though it’s rare for browser downloads to be this slow.
It’s possible your ISP or network route is imposing speed restrictions.
It’s slow because you’re fetching a single 20.4 GB model file over the Hub’s Xet backend, often via one browser stream, across a congested CDN path, while your disk must assemble many chunks. That combination bottlenecks network and I/O. The specific file is indeed ~20.4 GB and stored with Xet. (Hugging Face)
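To put the numbers in context, a back-of-the-envelope ETA calculation (a hypothetical helper, not an HF tool):

```shell
# Rough download-time estimate: size in GiB, sustained speed in MiB/s -> hours
eta_hours() {
  awk -v gib="$1" -v mibs="$2" 'BEGIN { printf "%.1f\n", gib * 1024 / mibs / 3600 }'
}

eta_hours 19 1.9   # prints 2.8 -- hours at the reported speed
eta_hours 19 40    # prints 0.1 -- a healthy multi-stream pull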
# Linux/macOS
pip install -U "huggingface_hub[cli]" hf_xet # docs: https://huggingface.co/docs/huggingface_hub/en/guides/cli
export HF_HOME="/fast-ssd/.cache/huggingface" # docs: https://huggingface.co/docs/huggingface_hub/en/package_reference/environment_variables
export HF_XET_HIGH_PERFORMANCE=1 # docs: https://huggingface.co/docs/huggingface_hub/en/package_reference/environment_variables
# If your download disk is an HDD, prefer sequential writes:
# export HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY=1 # same env vars doc
# Pull exactly the diffusion weight into ComfyUI
hf download Comfy-Org/Qwen-Image-Edit_ComfyUI \
--include "split_files/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors" \
--local-dir "/path/to/ComfyUI/models/diffusion_models"
# CLI --local-dir reference: https://huggingface.co/docs/huggingface_hub/en/guides/cli
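Once the download finishes, it’s worth verifying the file against the SHA256 digest shown on the HF file page before loading it in ComfyUI. A minimal sketch (the digest passed in is whatever the file page reports; macOS users can substitute `shasum -a 256`):

```shell
# Compare a local file's SHA256 against the expected digest from the HF file page
verify_sha256() {
  # $1 = file path, $2 = expected hex digest
  local actual
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo OK; else echo "MISMATCH: $actual"; fi
}
```

A mismatch usually means a truncated or corrupted download; re-run the CLI with the same `--local-dir` and it will resume/redo the file.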
Why this helps: Xet high-performance mode increases the number of concurrent range requests (at higher CPU cost); the cache and Xet writes land on a faster SSD; and the CLI avoids single-stream browser limits. (Hugging Face)
Windows (PowerShell):
pip install -U "huggingface_hub[cli]" hf_xet # CLI docs ↑
setx HF_HOME "D:\hf_cache" # env vars docs ↑ (setx only affects NEW shells; use $env:HF_HOME = "D:\hf_cache" for the current session)
setx HF_XET_HIGH_PERFORMANCE 1 # env vars docs ↑ (likewise: $env:HF_XET_HIGH_PERFORMANCE = "1" for the current session)
# For HDD targets only:
# setx HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY 1
hf download Comfy-Org/Qwen-Image-Edit_ComfyUI `
--include "split_files/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors" `
--local-dir "C:\ComfyUI\models\diffusion_models"
aria2c (multi-connection fallback):
# Get the file’s “Copy download link” from the HF file page
# (example file page: https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/blob/main/split_files/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors)
aria2c -x16 -s16 -j4 -c "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors?download=true"
# aria2/parallel guide: https://dev.to/susumuota/faster-and-more-reliable-hugging-face-downloads-using-aria2-and-gnu-parallel-4f2b
Users consistently report higher throughput with multi-connection pulls on large Hub assets. (DEV Community)
# Temporarily force fallback from hf-xet
export HF_HUB_DISABLE_XET=1 # Windows: setx HF_HUB_DISABLE_XET 1
# Also consider bumping timeouts for flaky links
export HF_HUB_DOWNLOAD_TIMEOUT=60
These toggles are documented and help isolate whether the Xet path or your route is the culprit. (Hugging Face)
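Before flipping these toggles, it can help to confirm what the download stack will actually do, i.e. whether hf_xet is installed and whether the disable flag is set in your current environment. A small sketch (assumes `python3` on PATH):

```shell
# Report Xet-related state of the current environment
python3 - <<'PY'
import importlib.util, os
print("hf_xet installed:", importlib.util.find_spec("hf_xet") is not None)
print("HF_HUB_DISABLE_XET:", os.environ.get("HF_HUB_DISABLE_XET", "(unset)"))
PY
```

Remember that `setx` on Windows only affects new shells, so check in the same shell you download from.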
If speed is stuck, switch network egress (VPN/off-VPN or another POP). Reports show material gains after changing region. (Hugging Face Forums)
About the .part name: that’s just a partial file created while the download is in progress; it’s renamed once complete. Use the file page to confirm the expected size before retrying. (Hugging Face)
Try aria2c and verify that throughput jumps above your current ~1.9 MB/s; if not, try the VPN step and the Xet disable toggle. (Hugging Face)
Otherwise, stick with the HF CLI and --local-dir. (Hugging Face)