Hi everybody. I am trying to set up transformers with llama2-7b, but I get the following error when starting my application:
Traceback (most recent call last):
  File "E:\python\llama2-local\main.py", line 10, in <module>
    model = AutoModelForCausalLM.from_pretrained(name, cache_dir="./cache/", token=auth_token, torch_device=torch.float16,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ikono\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ikono\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\modeling_utils.py", line 3165, in from_pretrained
    hf_quantizer.validate_environment(
  File "C:\Users\ikono\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\quantizers\quantizer_bnb_4bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
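
For context, here is roughly what line 10 of my main.py looks like. The traceback truncates the call, so the model name and the quantization argument below are reconstructed (the error path through quantizer_bnb_4bit.py suggests I am requesting 4-bit loading):

import torch
from transformers import AutoModelForCausalLM

name = "meta-llama/Llama-2-7b-chat-hf"  # assumed repo id, abbreviated here
auth_token = "hf_..."                   # my Hugging Face access token (redacted)

# Line 10 from the traceback; the arguments after torch_device were cut off,
# and load_in_4bit=True is my guess at what triggers the bnb-4bit quantizer.
model = AutoModelForCausalLM.from_pretrained(
    name,
    cache_dir="./cache/",
    token=auth_token,
    torch_device=torch.float16,
    load_in_4bit=True,
)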
Keep in mind: I have the newest versions of both libraries installed, but for some reason it still does not work. Can anybody help me? Thanks!
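
To show how I checked, this is the quick sanity test I ran in the same Python 3.12 interpreter that runs main.py:

import accelerate
import bitsandbytes

# Both imports succeed for me and print recent versions,
# which is why this ImportError is so confusing.
print("accelerate:", accelerate.__version__)
print("bitsandbytes:", bitsandbytes.__version__)

I am on Windows, in case that matters for bitsandbytes.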