I’ve tried restarting and factory-rebuilding the Space multiple times, but the build error persists. I’ve also researched and tried solutions from similar issues, to no avail.
I deleted the Gradio version pin in the requirements.txt file and changed the Gradio version specification in the README.md file to 4.x. The behavior changed slightly, but the git clone still fails…
I wonder whether it has something to do with the Space’s automatic virus scan flagging it as unsafe? It might be unrelated.
Edit:
I’ve looked at the commit history and there don’t appear to be any git-related changes in the past month. It may be a spec change on the HF Spaces side, or a bug.
I can’t tell which git clone is failing, which makes this hard to work around.
I commented out the Dockerfile stuff and it still happens.
I found similar symptoms, but they do not seem to be exactly the same situation.
This is a very troublesome error: in Dev mode I only get a “503”, and when I tried moving it to a Zero GPU space I got a build error with no information.
You might want to ask the staff on the HF Discord.
With strange errors like these, the cause is usually something wrong with the Spaces-related settings or the server-side settings. The user can only work around it in a roundabout way, not actually fix it. But when the error message provides this little information, there is no way to even work around it…
Error messages are usually meant to help you debug, not to harass you…
My guess is that this error is not due to any recent changes you have made to your Space, but that you got caught up in some change on the HF side. You probably can’t get around it without changing something fundamental.
There are plenty of Spaces that use a Dockerfile or GitHub resources, so that alone can’t be the cause. I really don’t know.
CPU Space storage is 50 GB, so large files can cause capacity errors. But in that case, of course, a disk-space error would show up first.
To begin with, this is crashing before requirements.txt and the rest are even read… even if you put git-lfs in packages.txt, the build isn’t going to get far enough to download the file. I guess that’s why it never gets that far.
I guess a workaround would be to put the large files in a model repo or dataset repo and download them when needed, but that’s a lot of work if they sit in a deep path. If this is the symptom, it’s 90% likely a bug in the Spaces git lfs handling. I’ve joined the Discord too, so I’ll report it there when I get a chance.
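A minimal sketch of that workaround, assuming `huggingface_hub` is available (it is preinstalled in Spaces); the repo id and file name below are placeholders, not from the actual Space:

```python
import os

MODEL_REPO = "your-username/your-model"  # hypothetical model repo id
MODEL_FILE = "model.safetensors"         # hypothetical large LFS file

def needs_download(local_path: str) -> bool:
    """True if the file is not already present on local disk."""
    return not os.path.exists(local_path)

def fetch_model(repo_id: str = MODEL_REPO, filename: str = MODEL_FILE) -> str:
    """Fetch the large file from a model repo at runtime instead of
    committing it to the Space repo; returns a local path."""
    local = os.path.join("models", filename)
    if not needs_download(local):
        return local  # reuse a copy already on disk
    from huggingface_hub import hf_hub_download  # lazy import
    # hf_hub_download caches the file, so repeated calls are cheap
    return hf_hub_download(repo_id=repo_id, filename=filename)
```

Calling `fetch_model()` once at app startup keeps the Space repo itself free of large LFS objects.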
For a model or dataset repo, you can upload files of up to 50 GB each and up to 300 GB per repo, or even more than 300 GB if you get permission, though I don’t know how. And while Spaces storage is slow, model and dataset repo storage is fast. There’s no reason not to use it… but it requires programming… yay, more work!
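For the upload side, a rough sketch using `huggingface_hub` (assumes you are already logged in, e.g. via `huggingface-cli login`; the repo id is a placeholder):

```python
def looks_like_repo_id(repo_id: str) -> bool:
    """Rough sanity check: hub repo ids have the form 'namespace/name'."""
    parts = repo_id.split("/")
    return len(parts) == 2 and all(parts)

def push_model(local_path: str, repo_id: str = "your-username/your-model") -> None:
    """Upload one large file to a model repo so the Space repo stays small."""
    if not looks_like_repo_id(repo_id):
        raise ValueError(f"expected 'namespace/name', got {repo_id!r}")
    from huggingface_hub import HfApi  # lazy import
    HfApi().upload_file(
        path_or_fileobj=local_path,
        path_in_repo=local_path.rsplit("/", 1)[-1],  # keep just the file name
        repo_id=repo_id,
        repo_type="model",
    )
```

You run this once from your local machine (or a script), then the Space only needs the small download logic.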
Update: Resolved (possibly still buggy) by moving the large LFS model files from Space storage to Hugging Face’s model hub and downloading them from there. Keeping the Space compact is the key to a successful build.