Utilizing the GPU for masked language modelling in TensorFlow

I am doing a masked language modelling task using the DistilBERT model.

This is the tokenizer that I trained:
import transformers as t

tokenizer = t.PreTrainedTokenizerFast(
    tokenizer_file='path/to/tokenizer.json',
    unk_token="[UNK]",
    pad_token="[PAD]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    mask_token="[MASK]",
)
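As a quick sanity check that the tokenizer loads and encodes correctly (the sample sentence is just an illustration, not from my data):

sample = tokenizer("This is a test sentence.")
print(sample.input_ids)  # token ids produced by the trained vocabulary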

I am also using the Hugging Face datasets library to preprocess my data, so everything is done in the standard way shown in this tutorial: Fine-tuning a masked language model - Hugging Face Course.
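Roughly, my preprocessing ends with a tf.data pipeline like the sketch below, following the course; the name tokenized_dataset, the column names, and the batch size are placeholders on my side, not exact code:

from transformers import DataCollatorForLanguageModeling

# Collator that randomly masks 15% of tokens on the fly for the MLM objective
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf"
)

# Convert the processed dataset (assumed name: tokenized_dataset) to tf.data
tf_train_dataset = tokenized_dataset["train"].to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    collate_fn=data_collator,
    shuffle=True,
    batch_size=32,
)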

I’m not getting any errors and the model is training well.

But my complaint is that the training is not leveraging the GPU on my machine (I noticed this in Task Manager, and one epoch of training takes around 3 hours). I even tried enabling GPU access from the NVIDIA Control Panel, but it did not help. My machine has an NVIDIA GTX 1650 with CUDA 12.0 (found by running the nvidia-smi command in the CLI).
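For what it's worth, here is how one can check whether TensorFlow actually sees the GPU (standard TensorFlow calls; an empty list means training silently falls back to the CPU):

import tensorflow as tf

# An empty list here means TensorFlow cannot see any GPU and will use the CPU
print(tf.config.list_physical_devices('GPU'))

# True only if the installed TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())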

Please help me understand my mistake.