Fine-tuning for question-answering tasks on a personal laptop

I have prepared a small FAQ dataset of questions and answers in JSON form and, following the Alpaca dataset format, created a text column to use as my training data. I now want to fine-tune a model on this dataset; the goal is a question-answering chatbot.
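To make the setup concrete, here is a minimal sketch of how I build the text column. The field names (`question`/`answer`) and the sample records are placeholders, not my actual data; the template is the common Alpaca-style prompt:

```python
import json

# Hypothetical FAQ records; field names "question"/"answer" are assumptions
raw = """[
  {"question": "How do I reset my password?",
   "answer": "Use the 'Forgot password' link on the login page."},
  {"question": "Where can I download invoices?",
   "answer": "Under Account -> Billing -> Invoices."}
]"""
records = json.loads(raw)

# Alpaca-style prompt: each example becomes a single "text" field
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{question}\n\n"
    "### Response:\n{answer}"
)

train_rows = [{"text": TEMPLATE.format(**r)} for r in records]
print(train_rows[0]["text"])
```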

For that, I tried to follow the Quickstart guide for the Supervised Fine-tuning Trainer (TRL's SFTTrainer). I got as far as the trainer.train() step, but then my kernel crashed repeatedly. Task Manager showed my CPU pinned at 100% for a long time, while memory stayed around 60% and the GPU (the integrated Radeon graphics on my AMD Ryzen 7 5700U) barely reached 10%.
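For reference, this is roughly what I'm running, adapted from the TRL quickstart (the model name facebook/opt-350m is the quickstart's example, the data file path is a placeholder, and the exact SFTTrainer arguments vary across TRL versions):

```python
from datasets import load_dataset
from trl import SFTTrainer

# Load my FAQ JSON (path is a placeholder) with the prepared "text" column
dataset = load_dataset("json", data_files="faq_alpaca.json", split="train")

trainer = SFTTrainer(
    model="facebook/opt-350m",  # example model from the TRL quickstart
    train_dataset=dataset,
)
trainer.train()  # this is the step where the kernel crashes
```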

I’m using a Dell Inspiron 5415 running Windows 11 and doing the training inside Ubuntu on WSL. I understand I have options like PEFT, which I shall definitely try, but first I want to know why the kernel is crashing before I even get to offloading work from the CPU to the GPU. Is there some configuration I need to change (either in the quickstart tutorial code or in system settings), or does this simply not work that way?
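One system setting I wondered about is the WSL2 resource cap, which is configured from the Windows side in %UserProfile%\.wslconfig. The values below are illustrative examples, not my current settings:

```ini
[wsl2]
memory=12GB      # maximum RAM WSL2 may use (by default only a fraction of host RAM)
swap=16GB        # swap file size available to the WSL2 VM
processors=8     # number of logical processors exposed to WSL
```

Changes take effect after running `wsl --shutdown` from Windows and restarting the distro. Could a cap like this explain the crash?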