I've been trying for several days to set up TensorRT to accelerate inference of the DeepSeek-R1-Distill-Qwen-32B model in a Hugging Face Space, but I'm running into a series of dependency conflicts.

For now, about 5.

You're correct about Docker's build time in Spaces: it doesn't provide access to GPU hardware. Any GPU-related commands therefore shouldn't be executed during your Dockerfile's build step. For instance, `nvidia-smi` will fail and `torch.cuda.is_available()` will report no GPU while the image is being built; they only work once the container is running on the assigned hardware.
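For illustration, here's a minimal Dockerfile sketch of that split (the base image tag, package choices, and startup command are assumptions for the example, not a tested Spaces config):

```dockerfile
# Assumed CUDA runtime base image; pick one matching your Space's hardware.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

# Build-time steps: installing dependencies needs no GPU, so this is fine.
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install torch --index-url https://download.pytorch.org/whl/cu121

# This would fail at build time, because no GPU is attached yet:
# RUN nvidia-smi

# Runtime step: the GPU is only attached when the container starts,
# so GPU checks (and model loading) belong here, e.g. in your app's startup.
CMD ["python3", "-c", "import torch; print(torch.cuda.is_available())"]
```

The general rule: `RUN` lines execute at build time with no GPU visible, while `CMD`/`ENTRYPOINT` (and anything your app does at startup) run on the Space's actual hardware.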