Load_in_8bit vs. loading 8-bit quantized model

I've been using QLoRA and was wondering if there was an issue with the quantization part.
It turns out the problem was the dataset I was using: the token count for some entries was too large and caused an OOM error.
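For anyone hitting the same thing: one way to catch this before training is to filter out entries whose token count exceeds your training sequence length. This is just a minimal sketch — the whitespace `token_count` below is a hypothetical stand-in for your model's real tokenizer (e.g. `len(tokenizer(text)["input_ids"])`), and `MAX_TOKENS` is an assumed cap you'd match to your `max_seq_length`:

```python
# Sketch: drop dataset entries that are too long to fit in memory
# during QLoRA fine-tuning. Hypothetical names throughout.

MAX_TOKENS = 512  # assumed cap; set this to your training max_seq_length


def token_count(text: str) -> int:
    # Stand-in for the real tokenizer; in practice use
    # len(tokenizer(text)["input_ids"]) from your model's tokenizer.
    return len(text.split())


def filter_long_entries(entries: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    # Keep only entries at or under the token cap.
    return [e for e in entries if token_count(e) <= max_tokens]


if __name__ == "__main__":
    data = ["a short example", "word " * 1000]
    kept = filter_long_entries(data)
    print(len(kept))  # only the short entry survives
```

Alternatively, truncating at tokenization time (rather than dropping entries) also avoids the OOM, at the cost of losing the tails of long examples.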