DeepSpeed ZeRO-2, PEFT, bitsandbytes training

How can one train with a bitsandbytes-quantized model, a PEFT config, and DeepSpeed ZeRO-2? The docs don't seem to cover this combination.
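For reference, this is roughly the setup I have in mind. The model name, LoRA hyperparameters, and DeepSpeed config path are just placeholders, and I'm not sure this is the supported way to wire the three pieces together:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantized base model via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # placeholder base model
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on top of the frozen quantized weights
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # placeholder target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# DeepSpeed ZeRO-2 wired in through TrainingArguments
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=4,
    deepspeed="ds_config_zero2.json",     # placeholder ZeRO-2 config file
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,          # dataset prepared elsewhere
)
trainer.train()
```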

My training finishes successfully, but then I have no pytorch_model.bin in the output directory. Is it because I am using PEFT that only the adapter weights get saved instead of a full .bin? Really, I'm just looking for a way to get the trained model, merge whatever adapters I need, and produce a final directory :pray:
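In case it helps clarify what I'm after, this is the kind of merge step I'm imagining at the end (reloading the base model in full precision and folding the trained adapter into it). The paths are placeholders, and I'm not sure this is the right approach with a quantized base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Reload the base model without quantization so the adapter can be merged
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",            # placeholder: same base model as in training
    torch_dtype=torch.float16,
)

# Load the trained LoRA adapter saved during training
model = PeftModel.from_pretrained(
    base_model,
    "outputs/checkpoint-final",            # placeholder adapter directory
)

# Fold the adapter weights into the base weights and save a standalone model
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.save_pretrained("merged-model")
```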