How to log predictions from the evaluation set to wandb after each Trainer validation?

I want to monitor my model's predictions on the validation set not only through metrics, but also by inspecting the quality of a few individual examples.

I am struggling with two aspects.

  1. How do I log only a fixed subset of examples?
    Can I use more than one evaluation dataset and dedicate one of them to be logged to wandb?

  2. Where should the logging code live?
    In a TrainerCallback? In a fake metric?

Thank you

Ondra


Have you been able to figure it out?

This could be helpful: Hugging Face Transformers | Weights & Biases Documentation
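
In case it helps, here is a minimal sketch of one possible approach: put the logging in a custom `TrainerCallback` whose `on_evaluate` hook runs predictions on a small, fixed slice of the validation set and logs them as a `wandb.Table`. It assumes a sequence-classification setup with `input_ids` and `labels` columns; the class name `LogSamplePredictionsCallback`, the `num_samples` argument, and the table column names are illustrative choices, not anything prescribed by the libraries.

```python
import numpy as np
import wandb
from transformers import TrainerCallback


class LogSamplePredictionsCallback(TrainerCallback):
    """Log predictions on a fixed subset of the eval set to wandb after every evaluation."""

    def __init__(self, trainer, tokenizer, eval_dataset, num_samples=20):
        self.trainer = trainer
        self.tokenizer = tokenizer
        # Keep the same examples every time, so predictions are comparable across steps.
        self.sample_dataset = eval_dataset.select(range(num_samples))

    def on_evaluate(self, args, state, control, **kwargs):
        # Predict only on the fixed subset (assumes a classification head).
        output = self.trainer.predict(self.sample_dataset)
        preds = np.argmax(output.predictions, axis=-1)

        table = wandb.Table(columns=["step", "text", "prediction", "label"])
        for example, pred in zip(self.sample_dataset, preds):
            text = self.tokenizer.decode(example["input_ids"], skip_special_tokens=True)
            # Column names ("input_ids", "labels") depend on how your dataset is tokenized.
            table.add_data(state.global_step, text, int(pred), int(example["labels"]))
        wandb.log({"sample_predictions": table})
```

The callback needs a reference to the trainer, so it has to be attached after the Trainer is constructed:

```python
callback = LogSamplePredictionsCallback(trainer, tokenizer, eval_dataset, num_samples=20)
trainer.add_callback(callback)
trainer.train()
```

This also covers the first question without a second evaluation dataset: the fixed subset is just a `select()` over the existing one, though passing a separate, dedicated dataset to the callback works the same way.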