Colab error (memory crashes)

I have this Trainer code running on a sample of only 10,000 records, but the GPU still runs out of memory. I am using Google Colab Pro, and this did not happen to me before, so something must be wrong in my code. Please take a look:

from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir='/content/drive/My Drive/results/distillbert',  # output directory
    overwrite_output_dir=True,
    do_predict=True,
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=4,   # batch size per device during training
    per_device_eval_batch_size=2,    # batch size for evaluation
    warmup_steps=1000,               # number of warmup steps for the learning rate scheduler
    evaluation_strategy="steps",     # evaluate during training so load_best_model_at_end can pick a best checkpoint
    eval_steps=1000,
    save_steps=1000,
    save_total_limit=10,
    load_best_model_at_end=True,
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=500,               # log every 500 steps
)

model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-cased")

trainer = Trainer(
    model=model,                         # the instantiated 🤗 Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=val_dataset,            # evaluation dataset
)

How big is each record? How big is it after tokenization?
Are you using a data loader? What batch size is it using?

What happens if you try to train on only 10 records?
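
Something along these lines can help answer those questions; it assumes train_dataset and val_dataset are the usual torch Datasets built from the tokenizer output (adjust the "input_ids" key if your items are structured differently):

import torch

# How long are the tokenized records? Long sequences are a common cause of OOM.
lengths = [len(train_dataset[i]["input_ids"]) for i in range(min(100, len(train_dataset)))]
print("max tokenized length in the first 100 records:", max(lengths))

# Train on just 10 records: if this also crashes, the problem is the per-step
# memory footprint (sequence length, batch size, model), not the dataset size.
tiny_trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=torch.utils.data.Subset(train_dataset, range(10)),
    eval_dataset=torch.utils.data.Subset(val_dataset, range(10)),
)
tiny_trainer.train()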

Thanks, it is resolved after I made the following changes:

per_device_train_batch_size=4,  # batch size per device during training
per_device_eval_batch_size=2,

Not sure if it is a good approach?
Thanks
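
Reducing the per-device batch size is a normal way to fit within GPU memory. If the smaller batch ends up hurting training, a common companion is gradient accumulation (and, optionally, mixed precision). A rough sketch, not a drop-in replacement for your arguments:

training_args = TrainingArguments(
    output_dir='/content/drive/My Drive/results/distillbert',
    num_train_epochs=3,
    per_device_train_batch_size=4,   # small per-step batch that fits in memory
    gradient_accumulation_steps=4,   # effective train batch size = 4 * 4 = 16
    per_device_eval_batch_size=2,
    fp16=True,                       # mixed precision further reduces activation memory (GPU only)
    weight_decay=0.01,
    logging_dir='./logs',
)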

I handled the problem by computing the vectors for the dataset in batches rather than all at once.
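
A minimal sketch of that idea, assuming texts is a plain list of strings and you use the matching DistilBERT tokenizer (the names here are placeholders):

from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-cased")

encodings = []
batch_size = 256
for start in range(0, len(texts), batch_size):
    chunk = texts[start:start + batch_size]
    # Encode one chunk at a time instead of handing the whole dataset to the
    # tokenizer in a single call.
    encodings.append(tokenizer(chunk, truncation=True, padding="max_length", max_length=128))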