Chapter 7 questions

I'm confused about a code snippet from the "Fine-tuning a masked language model" section. Why do we need to repeat the loss in the code below?

for step, batch in enumerate(eval_dataloader):
    with torch.no_grad():
        outputs = model(**batch)

    loss = outputs.loss
    losses.append(accelerator.gather(loss.repeat(batch_size)))

losses = torch.cat(losses)
losses = losses[: len(eval_dataset)]
try:
    perplexity = math.exp(torch.mean(losses))
except OverflowError:
    perplexity = float("inf")

Is it related to the extra duplicated samples that get added during distributed evaluation, as this thread suggested?
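To make the question concrete, here is my plain-Python sketch (hypothetical batch losses and sizes, no torch) of what `repeat` followed by the truncation `losses[: len(eval_dataset)]` seems to accomplish, assuming `outputs.loss` is the mean loss over the batch and the last batch may contain padded duplicate samples:

```python
# Assumption: outputs.loss is the MEAN loss over a batch, and the sampler
# may pad the last batch with duplicate samples to reach full batch_size.

batch_size = 4
eval_dataset_len = 10  # hypothetical: 10 real samples, last batch padded to 4

# Hypothetical per-batch mean losses for 3 batches of size 4.
batch_losses = [2.0, 3.0, 5.0]

# Mimic losses.append(loss.repeat(batch_size)) + torch.cat(losses):
# each scalar batch loss becomes one entry per sample in that batch.
losses = []
for loss in batch_losses:
    losses.extend([loss] * batch_size)

# Mimic losses[: len(eval_dataset)]: drop the padded duplicates at the end.
losses = losses[:eval_dataset_len]

# Per-sample mean: (4*2.0 + 4*3.0 + 2*5.0) / 10 = 3.0
mean_loss = sum(losses) / len(losses)
print(mean_loss)  # → 3.0
```

Without the repeat, `losses` would hold just one scalar per batch, the truncation would remove nothing, and the padded duplicates in the last batch would be fully counted in the average. Is that the right reading?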