Should fine-tuning always be 'supervised fine-tuning'?

I am in the process of fine-tuning an open-source model. Since I am doing a full fine-tune rather than LoRA or QLoRA, is it necessary to break all of my text into a Q/A format, or will the model learn my data anyway once its weights have been updated by it?
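For context, here is a minimal sketch of the two data layouts I am deciding between. The field names (`question`, `answer`) and the example rows are purely illustrative, not tied to any specific training framework:

```python
import json

# Layout 1: plain text corpus, the shape typically used for continued
# pretraining / unsupervised next-token training (one document per entry).
raw_corpus = [
    "Our product supports exporting reports as CSV and PDF.",
    "Refunds are processed within five business days.",
]

# Layout 2: the same information broken into Q/A pairs, the shape
# typically used for supervised (instruction) fine-tuning.
qa_pairs = [
    {"question": "What formats can I export reports in?",
     "answer": "CSV and PDF."},
    {"question": "How long do refunds take?",
     "answer": "Five business days."},
]

# Supervised fine-tuning pipelines commonly consume JSONL, one pair per line:
for pair in qa_pairs:
    print(json.dumps(pair))
```

My data currently looks like Layout 1, and my question is whether full fine-tuning forces me to convert it all into Layout 2.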

Please help.