[LLM Fine-Tuning] Supervised Fine Tuning Trainer (SFTTrainer) vs transformers Trainer

When should one opt for the Supervised Fine Tuning Trainer (SFTTrainer) instead of the regular transformers Trainer for instruction fine-tuning of Large Language Models (LLMs)? From what I gather, the regular transformers Trainer is typically associated with unsupervised (causal language modeling) fine-tuning, and is often used for things like input-output schema formatting after supervised fine-tuning has already been done. There seem to be many examples of fine-tuning tasks with very similar characteristics, yet some use the SFTTrainer while others use the regular Trainer. What factors should be considered when choosing between the two?
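For concreteness, here is a minimal sketch of the two setups as I understand them. The model name, dataset file, and `text` column are placeholders, and the exact SFTTrainer keyword arguments vary across trl versions, so treat this as illustrative rather than exact:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from trl import SFTTrainer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder instruction dataset with a "text" column holding the
# full prompt + response strings.
dataset = load_dataset("json", data_files="instructions.json", split="train")

args = TrainingArguments(output_dir="out", per_device_train_batch_size=4)

# Option A: regular Trainer -- I tokenize the data myself and use the
# causal-LM collator, which sets labels equal to the (shifted) inputs.
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

# Option B: SFTTrainer -- it handles tokenization (and optionally
# sequence packing) directly from the raw text column.
# (Argument names differ between trl versions.)
sft_trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
)
```

If both end up optimizing the same causal-LM loss, is the choice mainly about convenience features (packing, handling raw text columns), or are there deeper differences I'm missing?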

Thank you!