Validation loss is None in a custom PyTorch training loop

Hi! I am fine-tuning a Mask2FormerForUniversalSegmentation model from a pre-trained checkpoint using a custom PyTorch training loop. I can see the loss during the training step, i.e. when model.train() is set, but the loss is None during the validation step, i.e. when model.eval() is set and the forward pass runs inside a torch.no_grad() or torch.inference_mode() context.
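For context, Hugging Face models generally return `loss=None` whenever no labels are passed to `forward()` (for Mask2Former specifically, the loss requires `mask_labels` and `class_labels`). The toy stand-in below (a hypothetical `TinyModel`, not the real Mask2Former API) sketches that pattern: `model.eval()` and `torch.no_grad()` do not themselves null out the loss — it is the absence of labels in the call that does.

```python
import torch
from torch import nn

class TinyModel(nn.Module):
    """Toy stand-in mimicking the HF convention: loss is None unless labels are given."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, pixel_values, labels=None):
        logits = self.linear(pixel_values)
        loss = None
        if labels is not None:
            # Loss is computed even in eval mode / under no_grad,
            # as long as labels are actually passed in.
            loss = nn.functional.cross_entropy(logits, labels)
        return {"loss": loss, "logits": logits}

model = TinyModel()
model.eval()
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

with torch.no_grad():
    out_with_labels = model(x, labels=y)  # loss is a tensor
    out_without_labels = model(x)         # loss is None
```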

What am I missing? Any ideas on what I could do to get the validation loss?
