Trainer does not log validation loss and metrics

Thanks for this reply, I was facing a similar issue.
I haven’t tried it yet to see if it solves my problem as well.
If you don’t mind me asking a clarifying question: is the label_names argument to the Trainer dependent on which model one uses, or is it a generic argument regardless of the underlying model?
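For what it’s worth, my understanding is that label_names is a generic TrainingArguments option rather than something model-specific: it tells the Trainer which keys in each batch to treat as labels when computing and logging eval loss, and only the value you pass (e.g. "labels", or "start_positions"/"end_positions" for question answering) depends on how the model names its label inputs. A minimal sketch of how I would set it (the checkpoint name and eval settings are just examples, and train_ds/eval_ds are placeholders):

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# label_names lives in TrainingArguments, not in the model: it lists the
# batch keys the Trainer should treat as labels. "labels" is the usual
# default; QA models use "start_positions" and "end_positions" instead.
training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",  # spelled eval_strategy in newer versions
    label_names=["labels"],
)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# train_ds / eval_ds stand in for your own tokenized datasets:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```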

As a beginner with the HF library: on the one hand we are all grateful that it exists; on the other hand, IMHO it’s a mess, since it didn’t copy the good practices from other libraries like scikit-learn or PyTorch.

For example, from transformers import X, where X can be anything under the sun, instead of a more structured approach such as from transformers.models import X, from transformers.tokenizers import Y, from transformers.datasets import Z, etc.

Also, the whole AutoModelXYZ naming is utterly confusing; it would have been much clearer if we only had from transformers.models import Model, ModelConfig, and then in the ModelConfig one defined whatever task they are interested in.
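To make the contrast concrete, here is a sketch of the current Auto* pattern I’m referring to, followed by roughly the kind of API I had in mind; the second part is purely hypothetical and does not exist in transformers:

```python
# Today: one Auto* class per task, imported from the flat top-level namespace.
from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", config=config
)

# Hypothetical alternative (NOT a real transformers API), where the task
# is declared on the config instead of encoded in the class name:
# from transformers.models import Model, ModelConfig
# config = ModelConfig("bert-base-uncased", task="sequence-classification")
# model = Model(config)
```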
