Hey, I ran into an issue with the new Docker image support on Spaces … and I'm not sure if this is standard behaviour or specific to HF.
I am building an app that needs neural_renderer. When I add this requirement to the Dockerfile, the build fails with `No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'`. After some trial and error I figured out that I can install neural_renderer after the app starts.
So I assume that there is no GPU backing during Docker build time and it only becomes available after the container starts. Sounds like a common problem … or am I missing something obvious?
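For reference, this is roughly the workaround I'm using right now (just a sketch — the base image tag, package name and `app.py` entrypoint are placeholders for my actual setup): the GPU-dependent install is deferred to the container's start command, when the GPU is actually attached.

```Dockerfile
# Sketch: skip the GPU-dependent install at build time and run it on container start.
# Base image tag, package name and app.py are placeholders.
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel

WORKDIR /app
COPY . /app

# Everything that does NOT need the GPU can still be installed at build time.
RUN pip install --no-cache-dir -r requirements.txt

# neural_renderer needs CUDA, so install it only when the container starts
# (the GPU is available then), and then launch the app.
CMD pip install neural_renderer_pytorch && python app.py
```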
Here is a how-to for an NVIDIA GPU-backed runtime. This lets you build the container image on your local machine (or any other GPU system), push it to Docker Hub, and then inherit from it during the build on HF.
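The idea in short (a sketch with placeholder image names, not the exact how-to): compile neural_renderer once in an image built on a machine that has a GPU, push that image to Docker Hub, and let the Space's Dockerfile inherit from it so no CUDA compilation has to happen during the HF build.

```Dockerfile
# Dockerfile used by the Space (sketch; the image name is a placeholder).
# your-dockerhub-user/neural-renderer-base was built and pushed from a GPU machine
# and already contains the compiled neural_renderer extension.
FROM your-dockerhub-user/neural-renderer-base:latest

WORKDIR /app
COPY . /app

# Only CPU-safe steps here; nothing needs to touch CUDA during the HF build.
CMD ["python", "app.py"]
```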
I had a couple of errors. I thought the first one was caused by the absence of a GPU during the installation, but it turned out it was actually the second one that was.
I fixed them both by:

1. Running `RUN apt update && apt install -y libpython3.10-dev` before `RUN python setup.py install --user`
2. Replacing `RUN python setup.py install --user` with `RUN TORCH_CUDA_ARCH_LIST=Turing python setup.py install --user` (i.e. adding `TORCH_CUDA_ARCH_LIST=Turing`)
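Putting both fixes together, the relevant part of the Dockerfile ends up looking roughly like this (a sketch — the base image, Python version and source path are from my setup and may differ for you):

```Dockerfile
# Sketch of both fixes combined (base image, Python version and paths are placeholders).
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel

# Fix 1: install the Python dev headers that the first build error complained about.
RUN apt update && apt install -y libpython3.10-dev

WORKDIR /neural_renderer
COPY . /neural_renderer

# Fix 2: no GPU is visible at build time, so tell the extension which CUDA
# architecture to target instead of letting it auto-detect one.
RUN TORCH_CUDA_ARCH_LIST=Turing python setup.py install --user
```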
Thank you for the clarification, @kopyl. Apologies for the delay, @fjenett. You're correct about Docker's build time in Spaces: it doesn't provide access to GPU hardware, so any GPU-related commands shouldn't be executed during your Dockerfile's build step. For instance, commands like `nvidia-smi` or `torch.cuda.is_available()` can't be run while building an image. @fjenett, your suggestion to build a wheel or a container on a GPU-supported system and then load it into Spaces later is excellent.
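To make the wheel variant concrete, a minimal sketch (the package name, wheel location and base image are assumptions, not tested commands): build the wheel once on a machine with a GPU, add it to the Space repository, and only install the prebuilt wheel at build time.

```Dockerfile
# Sketch of the prebuilt-wheel approach (names and tags are placeholders).
# On a GPU machine, build the wheel first, e.g.:
#   pip wheel --no-deps neural_renderer_pytorch -w wheels/
# and add the resulting .whl to the Space repository.
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

WORKDIR /app
COPY wheels/ /tmp/wheels/

# Installing the prebuilt wheel needs no CUDA compilation, so it is safe at build time.
RUN pip install /tmp/wheels/*.whl

COPY . /app
CMD ["python", "app.py"]
```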