In a nutshell, “it’s highly likely that the progress bar’s behavior is simply misleading.”
Yes. It is most likely resuming, not restarting from zero.
Why the first line matters more
The important signal is the line that says the .incomplete file is being resumed from 49645653524/49907246508. That means Hugging Face already sees 49,645,653,524 bytes of the target file on disk. That is about 99.48% of the file, so only about 249.5 MiB remained. The downloader source also explicitly short-circuits when the already-present size equals the expected size, which confirms that partial-size tracking is a real part of the logic, not just a cosmetic message. (GitHub)
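The percentages above can be rechecked directly from the two byte counts in the log line. A quick sketch (the byte counts are copied from the log; everything else is plain shell and awk):

```shell
# Byte counts taken from the resume log line: have/total
have=49645653524
total=49907246508

# Remaining bytes still to download
echo $(( total - have ))

# Fraction already on disk, as a percentage
awk -v h="$have" -v t="$total" 'BEGIN { printf "%.2f%%\n", 100 * h / t }'
# → 261592984 remaining bytes, 99.48% already present
```

So the resume message really does mean the download was within ~250 MiB of finishing when it restarted.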
Why the second line still says 0%
That 0% does not necessarily mean “started over.” Two things can make it look that way.
First, rounding: 9.38M / 49.9G is only about 0.019% of the full file, so a progress bar that shows whole percentages will still print 0%. Second, the current Hugging Face source treats progress differently depending on the download path: the regular HTTP downloader initializes the bar with initial=resume_size, but the Xet downloader initializes it with initial=0. So a resumed download can still look like it restarted, even when prior bytes were recognized and reused. (GitHub)
The background behind this
This is easier to understand once you know how current Hugging Face downloads work. In huggingface_hub v1.0, the old resume_download parameter was removed, because resume is supposed to happen automatically when possible. Hugging Face also says all repositories on the Hub are Xet-enabled and hf_xet is now the default transfer path. Xet is chunk-based rather than just “one plain HTTP stream from byte 0 to byte N,” and Hugging Face keeps both a file cache and a chunk cache locally. (Hugging Face)
So, is it resumed or not?
My answer is: yes, probably resumed.
More precisely:
- the first line is strong evidence that partial data was found and is being used,
- the second line is weak evidence because the progress display can under-report resumed state,
- and the early 0% is fully compatible with a resumed session when only a tiny fraction of the total file has been transferred in that visible session so far. (GitHub)
The catch
“Resumed” does not mean “guaranteed perfect.”
There are open bug reports around large-file downloads: resume behavior can be unreliable, Xet transfers can error out on slow links, and the final file may still need verification. One 2025 bug report shows hf_xet failures and low speeds on a slow residential connection, and a 2026 issue argues that large-file downloads still lack robust partial-corruption recovery and chunk-level validation. So your log pattern is most likely benign, but the general area is not completely free of real bugs. (GitHub)
What I would do
1. Let it continue if it is still moving
If the byte count keeps increasing, I would not delete the partial file just because the bar says 0%. The cache is designed specifically to avoid re-downloading data unnecessarily. (Hugging Face)
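A low-tech way to check whether it is still moving is to watch the size of the .incomplete file itself. This sketch assumes the default cache location under ~/.cache/huggingface; adjust the path if you set HF_HOME or downloaded to a custom --local-dir, and note that the exact file layout inside the cache is an implementation detail:

```shell
# Find one in-progress download file in the default cache location
# (hypothetical path; adjust for HF_HOME or --local-dir setups)
f=$(find ~/.cache/huggingface -name '*.incomplete' 2>/dev/null | head -n 1)

if [ -n "$f" ]; then
  # Measure the size twice, 30 seconds apart; growth means progress
  s1=$(wc -c < "$f")
  sleep 30
  s2=$(wc -c < "$f")
  echo "grew by $(( s2 - s1 )) bytes in 30s"
else
  echo "no .incomplete file found"
fi
```

If the size is growing, the 0% display is cosmetic and the download is fine to leave alone.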
2. Increase the timeout
The default HF_HUB_DOWNLOAD_TIMEOUT is 10 seconds. On slow or unstable links, that is easy to trip. Hugging Face explicitly says increasing it helps on slow connections. (Hugging Face)
export HF_HUB_DOWNLOAD_TIMEOUT=60
hf download google/gemma-4-26b-a4b-it --token <token>
3. Verify the result after it finishes
This is the cleanest way to settle the question “did the resumed download produce a good file?” Hugging Face now documents hf cache verify, which checks local files against Hub checksums. (Hugging Face)
hf cache verify google/gemma-4-26b-a4b-it
If you downloaded into a custom folder:
hf cache verify google/gemma-4-26b-a4b-it --local-dir /path/to/download
4. Inspect the environment if it keeps acting strange
hf env is the command Hugging Face recommends for issue reports because it prints the machine setup and relevant downloader configuration. (Hugging Face)
hf env
5. If the Xet path seems to be the problem, disable it once as a diagnostic
Hugging Face documents HF_HUB_DISABLE_XET=1 to force-disable hf-xet. That is a reasonable troubleshooting step if repeated resumed downloads still stall or behave oddly. Also, the source says basic HTTP is blocked only for files over 50GB, and your logged file size is just under that threshold, so trying one non-Xet run is feasible here. (Hugging Face)
export HF_HUB_DISABLE_XET=1
hf download google/gemma-4-26b-a4b-it --token <token>
6. If your cache is on an HDD or awkward storage, adjust for that
Hugging Face says hf-xet is designed for SSD/NVMe-style parallel writes. If you are on a spinning disk, HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY=1 can help by switching to sequential writes. (Hugging Face)
export HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY=1
The clean mental model
Use this rule:
- “resume from X/Y” tells you whether prior bytes were detected.
- The progress bar tells you how the current session is being visualized.
- hf cache verify tells you whether the finished file is trustworthy. (GitHub)
So for your log, the best answer is:
Yes, it is very likely resumed.
The 0% line is most likely just a misleading early progress display, helped by rounding and by how the current Xet progress bar is initialized. (GitHub)