CUDA out of memory when using Trainer with compute_metrics

Recently I have been trying to fine-tune BART-base with Transformers (version 4.1.1). Fine-tuning runs very smoothly with compute_metrics=None in the Trainer. However, when I implement a function to compute metrics and pass it to the Trainer, I get a CUDA out of memory error during the evaluation stage.

I just want to try this feature out, so my implementation is straightforward:

def compute_metrics(pred):
    preds = pred.predictions
    labels = pred.label_ids
    print(preds.shape, labels.shape)
    return {
        'loss': 1
    }

Since the training process is normal when compute_metrics=None, I don't think the batch size is the problem. I still tried smaller batch sizes, but even with a batch size of 1 the situation is the same. I even tried a tinier BART, and still received the same error in the end.

One thing caught my attention :thinking:. With a tiny batch size the memory does not fill up at once; instead, the occupancy keeps rising until the GPU can't hold any more data. In other words, the processed data is not released in time. Is there any magic operation to solve this problem?

2 Likes

When computing metrics inside the Trainer, your predictions are all gathered together on the device (GPU/TPU) and only passed back to the CPU at the end (because that operation can be slow). If your dataset is large (or your model outputs large predictions) you can use eval_accumulation_steps to set a number of steps after which your predictions are sent back to the CPU (slower but uses less device memory). This should avoid your OOM.

3 Likes

I have tried to use eval_accumulation_steps, but another problem occurs.

Here is part of my fine-tuning code:

args = TrainingArguments(
    output_dir="exp/bart/results",
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",
    eval_steps=1000,
    logging_dir="exp/bart/logs",
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    eval_accumulation_steps=1,
)

trainer = Trainer(
    model=bart,
    args=args,
    data_collator=collate_fn,
    train_dataset=train_set,
    eval_dataset=eval_set,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

For reference, the evaluation set has 22,161 lines. When I set eval_accumulation_steps=1, I get:

MemoryError: Unable to allocate 149. GiB for an array with shape (22162, 36, 50265) and data type float32

It looks like the Trainer tries to allocate all of that space at once. Do I need to set other parameters so that the Trainer only allocates the space it needs each time?

This error means you are trying to get predictions that just don't fit in RAM, so there is nothing the Trainer can do to help. I don't know which BART model you're using, but it looks like you have huge logits, so you should split your evaluation dataset into small parts or use a custom evaluation loop.
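For a sense of scale, the array in that error is just the gathered logits: one float32 value per example, per position, per vocabulary token. Below is a minimal sketch of that arithmetic and of the splitting approach suggested above, assuming eval_set is a datasets.Dataset (so it supports .select) and trainer is the object built in the snippet earlier in this thread:

# Why the allocation is ~149 GiB: examples x seq_len x vocab_size x 4 bytes (float32)
num_examples, seq_len, vocab_size = 22162, 36, 50265
print(num_examples * seq_len * vocab_size * 4 / 1024**3)  # ~149.4 GiB

# Evaluate in chunks so only one chunk's logits are materialised at a time.
chunk_size = 1000  # pick something whose logits fit in RAM
chunk_metrics = []
for start in range(0, len(eval_set), chunk_size):
    indices = range(start, min(start + chunk_size, len(eval_set)))
    chunk_metrics.append(trainer.evaluate(eval_dataset=eval_set.select(indices)))
# chunk_metrics now holds one metrics dict per chunk; aggregate them as needed.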

1 Like

Thank you for this answer! It helped me a lot!

Hi, I’m still struggling with this issue. I’m trying to fine-tune a BART model, and while I can get it to train, I always run out of memory during the evaluation phase. This does not happen when I don’t use compute_metrics, so I think the issue lies there: without compute_metrics I can run batch sizes of up to 16, but with compute_metrics I can’t even use a batch size of 1, even with eval accumulation.

Could you please explain why compute_metrics is so much heavier, when I can run training and evaluation without issues otherwise? In your answer above you mentioned that the Trainer holds all predictions on the GPU, but why is this necessary for the metrics calculation?

I have used Fairseq for seq2seq tasks with similarly sized models and never ran into this issue, so I was also wondering whether it does the metrics computation differently.

2 Likes

I also experience this when I include my own compute_metrics implementation: GPU memory occupation gradually increases over time.

Could it be that the data structures (tensors, I assume) created by our own implementations at each evaluation are filling up GPU memory and overloading the device, while the default implementation somehow makes better use of garbage collection? Should we manually free variables that are no longer needed? It seems variables are not released after the compute_metrics() function is done. @sgugger

4 Likes
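One way to avoid holding the full logits at all, if your Transformers version is recent enough to have it: the Trainer accepts a preprocess_logits_for_metrics callable that is applied batch by batch on the device before predictions are accumulated, so you can reduce the (batch, seq_len, vocab_size) logits to token ids up front. A minimal sketch, reusing the names from the earlier snippet:

def preprocess_logits_for_metrics(logits, labels):
    # Runs on the device for every eval batch, before gathering: keep only the
    # predicted token ids instead of the full vocabulary-sized logits.
    if isinstance(logits, tuple):  # some models return (logits, extra_outputs)
        logits = logits[0]
    return logits.argmax(dim=-1)

trainer = Trainer(
    model=bart,
    args=args,
    data_collator=collate_fn,
    train_dataset=train_set,
    eval_dataset=eval_set,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)

With this, pred.predictions inside compute_metrics has shape (num_examples, seq_len) instead of (num_examples, seq_len, vocab_size), which is orders of magnitude smaller.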

I was getting this same error, so I am now using eval_accumulation_steps=16 with per_device_eval_batch_size=1, but now I get a “your RAM collapsed because you have used up the available RAM” error in Google Colab. Any further help would be appreciated. I am using Colab Pro with a 15 GB GPU, a 2.12 GB Pegasus model, and the DialogSum dataset; my train batch size is also 2.

Were you able to solve your problem?