How to deal with tokenizer out of memory in run_clm.py

I have looked everywhere and can't find anything on this. I am using the language modeling example script (run_clm.py) to finetune bloom-560m on my own dataset, and I keep hitting an out-of-memory error during tokenization with the 13 GB of RAM on Kaggle whenever I use even a remotely large training file. Here is the command I use:

! python run_clm.py \
    --model_name_or_path navaaesarosh/saqi_v0 \
    --train_file /kaggle/input/urdu-classics/urdu_classics.txt \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --do_train \
    --do_eval \
    --do_predict \
    --num_train_epochs 3.0 \
    --save_total_limit 2 \
    --output_dir saqi_v0.5/ \
    --report_to wandb \
    --run_name bloom-560m-urduclassics \
    --load_best_model_at_end \
    --save_strategy steps \
    --save_steps 1000 \
    --eval_steps 1000 \
    --evaluation_strategy steps
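
For context, my understanding is that run_clm.py tokenizes the entire file up front with datasets.map before training starts (and then groups the tokens into fixed-size blocks, which I have left out here), and that this preprocessing step is where the memory runs out. Below is a simplified sketch of what I believe that step boils down to; this is my own reduction, not the exact code from the script, and the file path and model name are just taken from my command:

# Rough reproduction of the preprocessing step in run_clm.py as I understand it
# (simplified; the real script also handles validation splits, caching, block grouping, etc.).
from datasets import load_dataset
from transformers import AutoTokenizer

# Same inputs as in my command above
raw_datasets = load_dataset(
    "text",
    data_files={"train": "/kaggle/input/urdu-classics/urdu_classics.txt"},
)
tokenizer = AutoTokenizer.from_pretrained("navaaesarosh/saqi_v0")

def tokenize_function(examples):
    # Each batch of raw text lines is tokenized in one pass
    return tokenizer(examples["text"])

# As far as I can tell, this map over the whole dataset is where Kaggle's
# 13 GB of RAM gets exhausted for me, well before the Trainer ever starts.
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    remove_columns=["text"],
)

If that reading is right, the OOM happens during preprocessing rather than training, which would also explain why a per-device batch size of 1 makes no difference for me.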