Regarding GGUF quantized model

I have fine-tuned a Mistral model with the PEFT LoRA technique, using bitsandbytes for quantization. I want to know where my quantized model is being saved, and how I can use the quantized model together with my adapter layer. If you have any reference, please share an article.