I’m trying to fine-tune Qwen2.5-7B-Instruct via AutoTrain, but after starting the training I get this error:
RuntimeError: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at Installation Guide.
How can I fix this? To be clear, I don’t use Python directly at all (it is installed, along with CUDA 12.0 and cuDNN 9.4), and I don’t have any code of my own.
I’m sorry if this is a stupid question
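This error usually means bitsandbytes cannot see a CUDA-enabled PyTorch build (a common cause is having the CPU-only torch wheel installed, even when CUDA itself is present on the machine). As a first diagnostic, the sketch below checks whether torch is installed and whether it can see a GPU; the helper name `cuda_report` is just for illustration, not part of any library:

```python
import importlib.util

def cuda_report():
    """Small diagnostic: is torch installed, and does it see a CUDA GPU?"""
    report = {
        "torch_installed": importlib.util.find_spec("torch") is not None,
        "cuda_available": False,
    }
    if report["torch_installed"]:
        import torch
        # False here (with torch installed) typically means a CPU-only
        # torch wheel, which is exactly what triggers the bitsandbytes error.
        report["cuda_available"] = torch.cuda.is_available()
    return report

if __name__ == "__main__":
    print(cuda_report())
```

If `cuda_available` comes back False, reinstalling torch with a CUDA build (per the selector on pytorch.org) is the usual fix before retrying AutoTrain.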
Is anyone else getting this error? I’m new to this and set things up in PyCharm on Windows 11. I’ve already installed TensorFlow 2.0, transformers, and torch via pip. I downloaded Llama from Hugging Face, and I get this error when I try to run it.
The folder path in my code is set this way - bot = Llama3("C:/Users/RAH/PycharmProjects/LLM/meta-llama/")
Could this be a folder path issue or a CUDA issue? Within the folder "C:/Users/RAH/PycharmProjects/LLM/meta-llama" I have two subfolders - Llama-3.3-70B-Instruct and Meta-Llama-3-8B-Instruct. Should I be pointing to one of those folders in the code, as opposed to the higher-level folder?
*** Error Message Below ***
File "C:\Users\RAH\PycharmProjects\LLM.venv\Lib\site-packages\transformers\integrations\bitsandbytes.py", line 537, in _validate_bnb_cuda_backend_availability
raise RuntimeError(log_msg)
RuntimeError: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at Installation Guide