I am using the Transformers PyTorch examples to do machine translation, and I am computing multiple metrics on large validation sets.
- How is it possible to integrate a distributed `compute_metrics` with `Seq2SeqTrainer` (with or without `predict_with_generate`, just during `do_eval`)? I took a look at the distributed usage of `load_metric`, but I am not sure whether it can be integrated without implementing a custom train/evaluation loop. (A minimal sketch of the setup I mean is included after this list.)
- How is it possible to have two test sets for `do_predict`? (See the second sketch below for what I have in mind.)
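
For reference, here is a minimal sketch of the non-distributed setup I am starting from, assuming a sacrebleu metric; the checkpoint name is just a placeholder, and `train_dataset` / `eval_dataset` stand for tokenized datasets prepared elsewhere:

```python
import numpy as np
from datasets import load_metric
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Placeholder checkpoint; my real model and datasets are different.
checkpoint = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

metric = load_metric("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # With predict_with_generate=True, preds are generated token ids.
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # Replace -100 (ignored positions) so the tokenizer can decode the labels.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = metric.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    return {"bleu": result["score"]}

training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    do_eval=True,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # tokenized datasets prepared elsewhere (not shown)
    eval_dataset=eval_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.evaluate()
```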
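
Continuing from the sketch above, what I would like for the second point is roughly the following, with `test_dataset_a` / `test_dataset_b` as hypothetical names for the two sets:

```python
# Run prediction on two separate test sets, keeping their metrics apart
# via metric_key_prefix (dataset names are hypothetical).
results_a = trainer.predict(test_dataset_a, metric_key_prefix="test_a")
results_b = trainer.predict(test_dataset_b, metric_key_prefix="test_b")
print(results_a.metrics)
print(results_b.metrics)
```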
Thanks,