I am using the Transformers PyTorch examples to do machine translation, and I am computing multiple metrics on large validation sets.
How is it possible to integrate distributed compute_metrics with Seq2SeqTrainer (with or without predict_with_generate, just during do_eval)? I took a look at the distributed usage of load_metric, but I am not sure whether it can be integrated without implementing a custom train/evaluation loop.
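To make the question concrete, here is a minimal sketch of the kind of compute_metrics function I mean, assuming predict_with_generate=True so that the predictions are generated token ids. The pad id and the exact-match metric are stand-ins for a real tokenizer and metric (e.g. sacrebleu):

```python
import numpy as np

PAD_ID = 0  # assumed pad token id; in practice tokenizer.pad_token_id

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    # The Trainer pads labels with -100; replace so they can be compared/decoded.
    labels = np.where(labels != -100, labels, PAD_ID)
    # Stand-in metric: sequence-level exact match on token ids.
    # (In my real setup this would decode and run several metrics.)
    exact = np.all(preds == labels, axis=-1).mean()
    return {"exact_match": float(exact)}

# usage:
preds = np.array([[5, 6, 0], [7, 8, 9]])
labels = np.array([[5, 6, -100], [7, 8, 1]])
print(compute_metrics((preds, labels)))  # first sequence matches after pad replacement
```

The question is whether a function like this, passed to Seq2SeqTrainer, can compute metrics in a distributed-safe way, or whether the gathering across processes has to be handled manually.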
How is it possible to have two sets on