How to set the Python version in Hugging Face Space?

I previously deployed my model on Hugging Face Spaces, and it ran smoothly for several months. However, Spaces seems to have changed recently: the default Python version is now 3.10, and my model deployment now fails consistently. Below is the error message:

  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/peft/peft_model.py", line 514, in __init__
    super().__init__(model, peft_config)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/peft/peft_model.py", line 79, in __init__
    self.base_model = LoraModel(peft_config, model)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/peft/tuners/lora.py", line 118, in __init__
    self._find_and_replace()
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/peft/tuners/lora.py", line 152, in _find_and_replace
    "memory_efficient_backward": target.state.memory_efficient_backward,
AttributeError: 'MatmulLtState' object has no attribute 'memory_efficient_backward'

Has anyone encountered this issue?

  1. Is the error caused by an incompatibility with Python 3.10?
  2. How can I set the Python version in Hugging Face Spaces back to my previous version, Python 3.8?

This?

python_version: string
Any valid Python 3.x or 3.x.x version.
Defaults to 3.10.

Thank you. I tried modifying the README.md file, but the Python version did not change and remains 3.10:

---
title: XXX
emoji: 🦀
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: false
license: apache-2.0
python_version: "3.8.19"
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

What should I do? I can’t pinpoint the issue right now. Perhaps I need to test with Python 3.10 locally to identify the exact cause.
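For local testing, a small diagnostic at the top of app.py can confirm which interpreter and library versions the Space (or your local environment) is actually running. This is a minimal sketch; the package list is illustrative, and any package that isn't installed is simply reported as such:

```python
# Minimal runtime diagnostic: print the Python version and key package
# versions so Space logs show exactly what environment is in use.
import platform


def runtime_report():
    """Collect the interpreter version and versions of a few key packages."""
    info = {"python": platform.python_version()}
    for pkg in ("torch", "peft", "bitsandbytes"):
        try:
            module = __import__(pkg)
            info[pkg] = getattr(module, "__version__", "unknown")
        except ImportError:
            info[pkg] = "not installed"
    return info


print(runtime_report())
```

Running this both locally and in the Space makes it easy to spot whether the two environments actually differ in Python or package versions.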


Perhaps this, without the quotes?

python_version: 3.8.19

Hi. The following commands helped in my case:

!python -m pip install --upgrade pip
!pip install --upgrade peft gcsfs
!pip install --upgrade bitsandbytes
!pip install --force-reinstall numpy==1.26.4

The problem appeared only recently, after updates to peft and bitsandbytes.
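The `!`-prefixed commands above are notebook syntax; on a Space, the equivalent is to pin the same packages in requirements.txt so the build installs them automatically. A sketch (the exact version pins are assumptions based on this thread, not tested values):

```text
# requirements.txt — pin the packages involved in the breakage
peft
bitsandbytes
numpy==1.26.4
```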


Due to an upgrade of CUDA or bitsandbytes versions, my model deployed on Hugging Face was not running properly. The error message was as follows:

================================ERROR=====================================
CUDA SETUP: CUDA detection failed! Possible reasons:
1. CUDA driver not installed
2. CUDA not installed
3. You have multiple conflicting CUDA libraries
4. Required library not pre-compiled for this bitsandbytes release!
CUDA SETUP: If you compiled from source, try again with `make CUDA_VERSION=DETECTED_CUDA_VERSION` for example, `make CUDA_VERSION=113`.
CUDA SETUP: The CUDA version for the compile might depend on your conda install. Inspect CUDA version via `conda list | grep cuda`.
===========================================================================

CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=123
python setup.py install
CUDA SETUP: Setup Failed!
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=123
python setup.py install
...
  File "/home/user/miniconda3/envs/py310/lib/python3.10/site-packages/bitsandbytes/functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "/home/user/miniconda3/envs/py310/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 22, in <module>
    raise RuntimeError('''
RuntimeError: 
        CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
        If you cannot find any issues and suspect a bug, please open an issue with details about your environment:
        https://github.com/TimDettmers/bitsandbytes/issues

Initially, I thought the issue was due to py3.10, so I wanted to downgrade the Python version of the space to py3.8 but wasn’t sure how to do it. Thanks to @John6666 for pointing me to the official documentation, which clarified that I can set the Python version for the space by using the python_version parameter.

After setting the Python version to 3.8 as per the documentation, the space still failed, which helped me realize that the issue wasn’t with the Python version. Thanks to @geophysuni for sharing his solution, which led me to suspect that the issue might be related to the versions of peft and bitsandbytes. I then searched for related issues online:

RuntimeError: CUDA Setup failed despite GPU being available. · Issue #1434 · bitsandbytes-foundation/bitsandbytes · GitHub
Unable to override PyTorch CUDA Version · Issue #1315 · bitsandbytes-foundation/bitsandbytes · GitHub

Following their advice, I tried bitsandbytes>=0.43.2 and bitsandbytes==0.44.1, but both still produced errors. When I happened to try bitsandbytes==0.41.0, the error disappeared. Note that with bitsandbytes==0.41.0 I also needed to install the scipy package; after that, my model deployed and ran successfully on the Space.

In summary, the CUDA version on Hugging Face Spaces is 12.3, which is not compatible with either an older bitsandbytes version (0.37.0) or the latest version (0.45.0). In my case, the version that worked was 0.41.0.
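To catch this kind of silent version drift early, a small startup guard can compare installed versions against known-good pins before the model loads. This is an illustrative sketch, not part of any Spaces API; the `check_pins` helper and the example pin versions are assumptions:

```python
# Startup guard: report any installed package whose version differs from a
# known-good pin, so the Space logs show version drift immediately.
import importlib.metadata


def check_pins(pins):
    """Return {package: installed_version} for every pin that is not satisfied.

    A missing package is reported with an installed version of None.
    """
    mismatches = {}
    for name, expected in pins.items():
        try:
            installed = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            installed = None  # package not installed at all
        if installed != expected:
            mismatches[name] = installed
    return mismatches


# Example usage with hypothetical pins from a known-good deployment:
drift = check_pins({"bitsandbytes": "0.41.0", "scipy": "1.10.1"})
if drift:
    print(f"Version drift detected: {drift}")
```

Printing (or raising) on drift at startup makes the failure obvious in the Space build logs instead of surfacing later as an opaque CUDA setup error.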

Finally, I want to thank everyone for their help in identifying and resolving the issue.


In summary, the CUDA version on Hugging Face Spaces is 12.3, which is not compatible with either an older bitsandbytes version (0.37.0) or the latest version (0.45.0). In my case, the version that worked was 0.41.0.

Nice info!