We have been using Hugging Face Spaces to host a model demo on an A10 GPU instance. Until recently, everything was working as expected and the demo was running smoothly.
Recently, however, the demo suddenly stopped working. After investigating, we found that we can no longer use CUDA with PyTorch on the instance: torch.cuda.is_available() returns False, and the error message suggests the problem may be an outdated CUDA driver.
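For reference, this is a minimal sketch of the check we run inside the Space; the exact versions printed depend on whatever the Space's environment currently provides.

```python
import torch

# Quick diagnostic run inside the Space.
print("PyTorch version:", torch.__version__)
print("Built for CUDA: ", torch.version.cuda)         # CUDA version PyTorch was compiled against
print("CUDA available: ", torch.cuda.is_available())  # currently returns False for us
print("Device count:   ", torch.cuda.device_count())  # 0 when CUDA cannot be initialized
```

The driver-related message shows up at the same point, when CUDA initialization is attempted.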
Does anyone know of a solution to this problem?