Fine-tuning an LLM (T5) using QLoRA

I am trying to fine-tune T5 for a question-answering task using QLoRA. But I think I am doing something incorrectly, because the time and memory used when fine-tuning with QLoRA are the same as when fine-tuning the full model.
I loaded the model with a BitsAndBytesConfig and added LoRA adapters to it with a LoraConfig (model.print_trainable_parameters() reports 0.08% trainable parameters).
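
For context, my setup looks roughly like this (a simplified sketch; the exact model name, target modules, and hyperparameters are placeholders, not my real values):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "t5-base"  # placeholder for the T5 variant I am using

# QLoRA-style 4-bit quantization: NF4 with double quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model before attaching adapters
model = prepare_model_for_kbit_training(model)

# LoRA on the T5 attention projections
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q", "v"],
    task_type="SEQ_2_SEQ_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # this is what reports ~0.08% trainable
```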
How do I check whether only the LoRA parameters are getting updated?
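
Is it enough to inspect `requires_grad`, or do I need to compare weight snapshots before and after a training step? Something like the sketch below is what I had in mind (assuming `model` is the PEFT-wrapped model from above):

```python
# 1) Which parameters does the optimizer see? Only names containing
#    "lora_" should show up here.
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name)

# 2) Snapshot all parameters, run one training step, then check
#    which tensors actually changed.
before = {n: p.detach().clone() for n, p in model.named_parameters()}

# ... run a single optimizer step on one batch here ...

for n, p in model.named_parameters():
    if not torch.equal(before[n], p.detach()):
        print("changed:", n)  # expect only LoRA weights to change
```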