I am trying to fine-tune a model using PEFT, but I noticed that bitsandbytes doesn't work on Apple M3; it works fine on Google Colab (CUDA with a T4). Is there a workaround to quantize a model on the Apple M3 architecture?
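For context, this is a minimal sketch (assumed, since the post doesn't include code; the model ID is a placeholder) of the kind of 4-bit bitsandbytes setup that works on a CUDA T4 in Colab but fails on Apple M3, because bitsandbytes requires CUDA kernels that MPS doesn't provide:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# QLoRA-style 4-bit quantization config; this loads fine on CUDA
# but errors on Apple Silicon, where bitsandbytes has no backend.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",  # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)
```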