Hi,
Some of my Gradio spaces that were working previously are no longer functioning. The first issue seems to be related to the Debian 13 update: my Gradio spaces were likely initially deployed with Debian 12.
After trying the workaround suggested by john6666, one of my older spaces restarted, but it now gets stuck with a different Python error.
For another space deployed with Docker, I modified the Dockerfile to specify the Debian and Python versions:
```dockerfile
FROM python:3.11-slim-bookworm
# Instead of: FROM python:3.11-slim
```

This pins Python 3.11 on Debian 12 (Bookworm), since the default `python:3.11-slim` tag now points to Debian 13 (Trixie).
However, the build initially failed with:

```
E: Package 'libgl1-mesa-glx' has no installation candidate
```
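For reference, `libgl1-mesa-glx` was removed from current Debian releases; the OpenGL runtime (`libGL.so.1`) now comes from `libgl1`. A minimal sketch of the kind of Dockerfile change that resolves this, assuming the space only needs the GL runtime:

```dockerfile
# libgl1-mesa-glx no longer exists in Debian 12/13; libgl1 provides libGL.so.1.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgl1 \
    && rm -rf /var/lib/apt/lists/*
```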
After fixing the package name, the space no longer shows that error, but the build now hangs at:

```
Building wheel for llama-cpp-python (pyproject.toml): started
```

and eventually times out.
The same issue occurs in a third space that was working today until I changed its name (which triggered a rebuild). Now, it also gets stuck at the same build stage.
For my older spaces deployed automatically with Gradio, it would be ideal if, during a rebuild, the versions of the OS, Python, Gradio, and other essential dependencies remained the same as those used during the initial deployment. This would help avoid failures during restarts or rebuilds.
Note: I know that package versions can be pinned in `requirements.txt` (though not the base OS/container image).
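As a concrete sketch of that note (assuming it is run from inside a still-working environment, e.g. temporarily from the space's app script), the installed package versions can be dumped in `requirements.txt` pin format:

```python
from importlib import metadata

# Emit exact "name==version" pins for every installed distribution,
# similar to `pip freeze`, so a rebuild can reuse the same versions.
pins = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
    if dist.metadata["Name"]  # skip entries with broken metadata
)
print("\n".join(pins))
```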
My questions:

- For my Hugging Face Spaces that were automatically deployed for Gradio, is there a way to find out the versions of the OS, Python, and the main packages/dependencies that were used? That would let me lock those versions simply by editing the `requirements.txt` file.
- Is there a way to stay on, for example, Debian 12 with Python 3.10 during a rebuild for spaces deployed without a Dockerfile?
- Regarding the current error (`Building wheel for llama-cpp-python (pyproject.toml): started`): is pinning a version of `llama-cpp-python` that is distributed as a prebuilt wheel (so nothing needs to be compiled) the only solution?
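For context on the first question, part of this can already be read out from inside a running space; a minimal sketch (assumes Linux and Python >= 3.10 for `platform.freedesktop_os_release`):

```python
import platform
import sys

def environment_report():
    """Return the OS release name and Python version of the current environment."""
    try:
        # Reads /etc/os-release; Linux-only, added in Python 3.10.
        info = platform.freedesktop_os_release()
        os_name = info.get("PRETTY_NAME") or info.get("NAME", "unknown")
    except (OSError, AttributeError):
        os_name = platform.platform()  # generic fallback
    return {"os": os_name, "python": sys.version.split()[0]}

print(environment_report())
```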
Thank you for your feedback!