Llama2 fine-tuning for summarization on MLSUM

Hello, I am trying to fine-tune the Llama2-7B model on a German dataset (MLSUM) for the summarization task. Everything works fine until the evaluation step of the Trainer: when I try to compute the ROUGE metrics on the validation dataset, the model output is a 3-dimensional array (which I assume are the logits over the vocabulary), while the labels are only 2-dimensional.
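For reference, my metric computation looks roughly like this (a minimal sketch, not my exact code; `tokenizer` is assumed to be the Llama2 tokenizer from the fine-tuning setup with a pad token already set, and I use the `evaluate` library's ROUGE implementation):

```python
import evaluate
import numpy as np

rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # problem: `predictions` arrives as a 3-dim array (batch, seq_len, vocab_size),
    # but batch_decode needs 2-dim token ids like `labels` (batch, seq_len)
    # replace the -100 ignore index in the labels so they can be decoded
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_labels)
```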

Does anyone have an idea how I can transform the model output into the right format so I can compute the ROUGE metric between it and the label sentences?
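Is the right approach simply to take the argmax over the last (vocabulary) dimension so the predictions become token ids before they reach `compute_metrics`, for example via the Trainer's `preprocess_logits_for_metrics` argument? Something like the sketch below is what I have in mind (`model`, `training_args`, the datasets and `compute_metrics` are assumed to be defined as above):

```python
import torch
from transformers import Trainer

def preprocess_logits_for_metrics(logits, labels):
    # some models return a tuple (logits, past_key_values, ...); keep only the logits
    if isinstance(logits, tuple):
        logits = logits[0]
    # (batch, seq_len, vocab_size) -> (batch, seq_len) predicted token ids
    return logits.argmax(dim=-1)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
```

(I realize that taking the argmax of teacher-forced logits is not the same as actually generating summaries with `model.generate`, but it would at least let the metric run during evaluation.)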