Where does eval_preds in the compute_metrics function come from?

In Chapter 3 of the Hugging Face NLP course, in the section "Fine-tuning a model with the Trainer API", I see this concise function, which takes an argument called eval_preds:

```python
import evaluate
import numpy as np

def compute_metrics(eval_preds):
    metric = evaluate.load("glue", "mrpc")
    # eval_preds unpacks into the model's raw logits and the true labels
    logits, labels = eval_preds
    # take the highest-scoring class as the prediction
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```
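
For reference, this is roughly how the chapter wires the function into the Trainer (a minimal sketch; model, training_args, tokenized_datasets, data_collator, and tokenizer are all defined earlier in the chapter):

```python
from transformers import Trainer

# Sketch based on the course chapter: compute_metrics is only ever
# passed in by name here, and eval_preds never appears explicitly.
trainer = Trainer(
    model,
    training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
```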

I can roughly guess what the function is doing, but where does eval_preds come from? Is it an object that becomes automatically available during training? It would definitely help to update the material to explain this.

Thanks for the awesome course.