I need to calculate the loss for a batch of sentences, but when I do this I only get the average loss over the whole batch, not the loss for each sentence separately.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2LMHeadModel.from_pretrained('distilgpt2')
# input_ids: a padded batch of token IDs for several sentences
outputs = model(input_ids, labels=input_ids)
print(outputs.loss)  # a single scalar: the loss averaged over the whole batch
Is it possible to somehow calculate the loss of each sentence separately within one batch, or do I have to use batch_size=1, which would be much slower?
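For reference, this is the kind of per-sentence output I am after. It is only a sketch I put together by computing the token-level cross-entropy myself with reduction='none' and averaging per sentence (the example sentences, the eos-as-pad trick, and the manual masking are my own assumptions, not something from the docs), so I'm not sure it is the intended way:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.pad_token = tokenizer.eos_token  # assumption: GPT-2 has no pad token, so reuse EOS
model = GPT2LMHeadModel.from_pretrained('distilgpt2')

# hypothetical example sentences, just for illustration
sentences = ["This is the first sentence.", "And a second, slightly longer sentence."]
enc = tokenizer(sentences, return_tensors='pt', padding=True)
input_ids, attention_mask = enc.input_ids, enc.attention_mask

with torch.no_grad():
    logits = model(input_ids, attention_mask=attention_mask).logits

# shift so that each position predicts the next token
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:]
shift_mask = attention_mask[:, 1:].float()

# per-token loss with no reduction, then masked average per sentence
loss_fct = torch.nn.CrossEntropyLoss(reduction='none')
token_loss = loss_fct(shift_logits.transpose(1, 2), shift_labels)   # (batch, seq_len-1)
per_sentence_loss = (token_loss * shift_mask).sum(dim=1) / shift_mask.sum(dim=1)
print(per_sentence_loss)  # one loss value per sentence in the batch

Is this roughly right, or is there a built-in way to get this directly from the model output?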