Can I load a model that was fine-tuned with LoRA under 4-bit quantization as an 8-bit model instead?

Or do I have to load it in 4-bit every time? Thanks.
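Concretely, here is roughly what I have in mind (a sketch, assuming the Hugging Face `transformers` + `peft` stack with `bitsandbytes`; `"base-model-id"` and `"path/to/lora-adapter"` are placeholder names, not real checkpoints):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the *base* model in 8-bit, even though the LoRA adapter
# was trained with the base model quantized to 4-bit (QLoRA-style).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                 # placeholder: whatever base checkpoint was used
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the saved LoRA adapter weights on top of the 8-bit base.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
```

My understanding is that the adapter weights themselves are stored in full/half precision and only the base model is quantized, so this should be mechanically possible; I'm mainly asking whether it's supported and whether the quality holds up when the base precision differs from training.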