Fine-tuning with Apple M3

I am trying to fine-tune a model using PEFT, but I noticed that bitsandbytes doesn't work on my Apple M3; the same setup works fine on Google Colab (CUDA with a T4). Is there a workaround to quantize a model on Apple Silicon (M3)?
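
For reference, this is a minimal sketch of the kind of setup I mean (the model name and LoRA hyperparameters are just placeholders, not my exact config). It runs fine on the T4, but on the M3 the bitsandbytes quantization step fails since there is no CUDA device:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization config -- works on Colab's T4 (CUDA),
# but fails on Apple M3 because bitsandbytes relies on CUDA kernels.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)

# Example LoRA settings, applied on top of the quantized base model
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```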