Did something change recently regarding caching / ETags?

I have been using a cached model, lmstudio-community/Qwen3-235B-A22B-GGUF:Q8_0, with llama.cpp for a few weeks. Today, although nothing changed on my side and the model apparently has not been updated either, llama.cpp started downloading it again instead of using the download cache.

This is quite annoying, as it will take about two hours to download it again on my Internet connection, and the result will probably be identical to the previously cached version.

From what I can see, the ETag changed; it doesn't even have the same length as before, so I'm guessing some Hugging Face software update, upgrade, or infrastructure change caused this.

This is the relevant part of the output from llama.cpp at the moment it started re-downloading the model:

0.00.351.392 I curl_perform_with_retry: Trying to download from https://huggingface.co/lmstudio-community/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B-Q8_0-00001-of-00007.gguf (attempt 1 of 3)...
0.00.906.601 W common_download_file_single: ETag header is different ("2ac7a128fb9139f7e07d64f96f97c72a-1000" != "360b1ab43ac40095873b3a8cdc13509be3c8872ee7c1d15fe5abd9db3a9fd867"): triggering a new download
0.00.906.608 W common_download_file_single: deleting previous downloaded file: /home/vlad/.cache/llama.cpp/lmstudio-community_Qwen3-235B-A22B-GGUF_Qwen3-235B-A22B-Q8_0-00001-of-00007.gguf
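Looking at the two ETags in the log, the old one ("2ac7a128fb9139f7e07d64f96f97c72a-1000") has the shape of an S3-style multipart ETag (a 32-hex-digit value plus a part count), while the new one is 64 hex digits, the length of a SHA-256 digest. This is only a guess about what the server is emitting, but the format difference alone would explain why llama.cpp's string comparison triggers a re-download. A minimal sketch of a classifier (classify_etag is a hypothetical helper, not part of llama.cpp):

```python
import re

def classify_etag(etag: str) -> str:
    """Heuristically classify an ETag value by its shape.

    Hypothetical helper for illustration; llama.cpp itself just
    compares the stored and received ETag strings for equality.
    """
    etag = etag.strip('"')  # ETags are often quoted on the wire
    if re.fullmatch(r"[0-9a-f]{32}-\d+", etag):
        return "s3-multipart"   # 32 hex digits + "-<part count>"
    if re.fullmatch(r"[0-9a-f]{64}", etag):
        return "sha256-length"  # 64 hex digits, SHA-256-sized
    if re.fullmatch(r"[0-9a-f]{32}", etag):
        return "md5-length"     # 32 hex digits, MD5-sized
    return "opaque"

print(classify_etag('"2ac7a128fb9139f7e07d64f96f97c72a-1000"'))
# s3-multipart
print(classify_etag(
    "360b1ab43ac40095873b3a8cdc13509be3c8872ee7c1d15fe5abd9db3a9fd867"))
# sha256-length
```

If the Hub changed how it serves (or hashes) these files, every cached ETag of the old shape would mismatch, even though the file bytes are unchanged.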

I wonder why…
It doesn't seem like there have been any model changes, so maybe there was a specification change on the Hub side.