Report metric per sample on evaluate

Hi All,
How are you?
Working on a TTS problem, I would like to report the WER per sample and save it as a DataFrame at the end of the evaluation step. The DataFrame columns would be: sample name/path | prediction | target | WER.
I tried using a custom callback, but by the time it runs the metrics are already aggregated, and I do not want to run the evaluation twice. Another approach would be to rewrite the Trainer’s evaluation loop, but we would like to avoid that.
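For concreteness, here is a minimal sketch of the output I’m after, with a hand-rolled WER (Levenshtein distance over words) instead of an evaluation library. The sample names and transcripts are made-up placeholders; in practice the predictions and targets would come from the decoded model outputs:

```python
import pandas as pd

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Rolling 1-D array for the edit-distance DP table.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = d[0]  # value of d[i-1][0]
        d[0] = i
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            # deletion, insertion, substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + cost)
    return d[-1] / max(len(ref), 1)

# Placeholder per-sample results (in practice: one row per eval example).
rows = [
    {"sample": "clip_001.wav", "prediction": "hello world", "target": "hello world"},
    {"sample": "clip_002.wav", "prediction": "good morning sam", "target": "good morning sir"},
]
df = pd.DataFrame(rows)
df["wer"] = [wer(r["target"], r["prediction"]) for r in rows]
print(df)
```

This is exactly the table I want to end up with after a single evaluation pass; the open question is where to hook in so I still have the un-aggregated per-sample predictions and references.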
What are my options?