Is there a way to print the evaluation predictions after each `eval_steps` interval?

Essentially, my training log returns:

```
***** Running Evaluation *****
  Num examples: Unknown
  Batch size = 8
Saving model checkpoint to ./whisper-small/checkpoint-1000
Configuration saved in ./whisper-small/checkpoint-1000/config.json
Configuration saved in ./whisper-small/checkpoint-1000/generation_config.json
Model weights saved in ./whisper-small/checkpoint-1000/pytorch_model.bin
Feature extractor saved in ./whisper-small/checkpoint-1000/preprocessor_config.json
tokenizer config file saved in ./whisper-small/checkpoint-1000/tokenizer_config.json
Special tokens file saved in ./whisper-small/checkpoint-1000/special_tokens_map.json
added tokens file saved in ./whisper-small/checkpoint-1000/added_tokens.json
```
| Step | Training Loss | Validation Loss | WER |
|------|---------------|-----------------|-----|
| 1000 | 0.337200 | 0.387359 | 38.595348 |
| 2000 | 0.305800 | 0.368565 | 36.393789 |
| 3000 | 0.301600 | 0.333480 | 50.099939 |
| 4000 | 0.289900 | 0.332969 | 66.476927 |
| 5000 | 0.276800 | 0.327825 | 84.689030 |
| 6000 | 0.257700 | 0.316814 | 102.301498 |
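For context on why these numbers can look strange: WER can legitimately exceed 100% (insertions count as errors, so a hypothesis longer than the reference can push it past 100), and punctuation or casing mismatches inflate it badly. Below is a minimal, self-contained sketch of that effect; the `wer` and `normalize` helpers are hypothetical toy implementations I'm introducing for illustration (in practice you would use `jiwer` or the `evaluate` library's `wer` metric):

```python
import re

def wer(reference: str, hypothesis: str) -> float:
    """Toy word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)  # assumes a non-empty reference

def normalize(text: str) -> str:
    """Lowercase and strip punctuation before scoring."""
    return re.sub(r"[^\w\s]", "", text.lower()).strip()

ref, hyp = "Hello, world!", "hello world"
print(wer(ref, hyp))                        # -> 1.0 (case/punctuation count as errors)
print(wer(normalize(ref), normalize(hyp)))  # -> 0.0
```

If your validation references keep punctuation but the model's outputs don't (or vice versa), a gap like this alone can account for a very high WER.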

Is it possible to print the expected (reference) text and the generated text at each evaluation step? The WER looks suspiciously high, and while I doubt it's because of special characters, I just want to make sure.
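In the usual Whisper fine-tuning setup, `compute_metrics` receives the predicted and label token IDs, so one place to do this is to decode and print a few reference/hypothesis pairs right there. A sketch, assuming a tokenizer/processor and a WER metric (e.g. `evaluate.load("wer")`) are already set up; `make_compute_metrics` and `num_samples_to_print` are names I'm introducing here, not part of the library:

```python
def make_compute_metrics(tokenizer, wer_metric, num_samples_to_print=5):
    """Build a compute_metrics fn that also prints a few decoded pairs."""
    def compute_metrics(pred):
        pred_ids = pred.predictions
        label_ids = pred.label_ids.copy()
        # The data collator pads labels with -100 so they are ignored by the
        # loss; replace with pad_token_id before decoding.
        label_ids[label_ids == -100] = tokenizer.pad_token_id

        pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
        label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)

        # Print a handful of reference/hypothesis pairs each evaluation
        for ref, hyp in zip(label_str[:num_samples_to_print],
                            pred_str[:num_samples_to_print]):
            print(f"REF: {ref!r}")
            print(f"HYP: {hyp!r}")

        wer = 100 * wer_metric.compute(predictions=pred_str,
                                       references=label_str)
        return {"wer": wer}
    return compute_metrics
```

Pass the returned function as `compute_metrics=` to `Seq2SeqTrainer`, with `predict_with_generate=True` in the training arguments so `pred.predictions` contains generated token IDs rather than logits. Printing with `!r` makes stray special characters and whitespace visible, which should settle the special-characters question directly.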