A few minutes ago, several Zero GPU Spaces suddenly crashed, so I tried rebuilding them. It seems the PyTorch version constraint has become a little more flexible, which is probably an intentional fix…
===== Build Queued at 2025-08-25 08:37:20 / Commit SHA: e548b67 =====
--> FROM docker.io/library/python:3.10.13@sha256:d5b1fbbc00fd3b55620a9314222498bebf09c4bf606425bf464709ed6a79f202
DONE 0.0s
--> RUN apt-get update && apt-get install -y git git-lfs ffmpeg libsm6 libxext6 cmake rsync libgl1 && rm -rf /var/lib/apt/lists/* && git lfs install
CACHED
--> RUN pip install --no-cache-dir pip -U && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf_xet>=1.0.0,<2.0.0" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1" "pydantic~=1.0" torch==2.4.0
CACHED
--> RUN apt-get update && apt-get install -y fakeroot && mv /usr/bin/apt-get /usr/bin/.apt-get && echo '#!/usr/bin/env sh\nfakeroot /usr/bin/.apt-get $@' > /usr/bin/apt-get && chmod +x /usr/bin/apt-get && rm -rf /var/lib/apt/lists/* && useradd -m -u 1000 user
CACHED
--> WORKDIR /home/user/app
CACHED
--> RUN wget --progress=dot:giga https://developer.download.nvidia.com/compute/cuda/12.9.0/local_installers/cuda_12.9.0_575.51.03_linux.run -O cuda-install.run && fakeroot sh cuda-install.run --silent --toolkit --override && rm cuda-install.run
CACHED
--> COPY --chown=1000:1000 --from=root / /
CACHED
--> RUN apt-get update && apt-get install -y curl && curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && apt-get install -y nodejs && rm -rf /var/lib/apt/lists/* && apt-get clean
CACHED
--> Restoring cache
DONE 198.8s
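
For anyone curious what a rebuilt Space actually ended up with (the log above pins torch==2.4.0 and installs the CUDA 12.9 toolkit), a minimal sketch like the following can print the installed versions at runtime. This is just an illustrative check, not part of the build; the file name is arbitrary, and on Zero GPU the CUDA-availability flag may read False outside a GPU task.

# version_check.py - quick sanity check of what the rebuilt Space installed.
# Illustrative only; not part of the build shown above.
import torch

print("torch:", torch.__version__)                    # installed PyTorch version
print("built against CUDA:", torch.version.cuda)      # None on CPU-only wheels
print("CUDA available:", torch.cuda.is_available())   # may be False outside a Zero GPU task

Dropping something like this into the app's startup code (or running it in a terminal) is enough to confirm whether the effective PyTorch version changed after the rebuild.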