The default behavior of `Trainer(...)` when evaluating a model is to disable Dropout. Concretely, M repeated prediction runs produce exactly the same results:
```python
for i in range(M):
    logits, labels, metrics = trainer.predict(tokenized_datasets["eval"])
    y_pred = np.argmax(logits, axis=2)
    ...
```
Now I am trying to apply the Monte Carlo Dropout trick introduced in this answer. This requires keeping Dropout active while making predictions on the validation set.

I am wondering how to achieve this goal. Any input is appreciated.
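For reference, here is a minimal sketch of what I mean by "turning Dropout on at prediction time", using a toy `torch.nn` model as a stand-in for the fine-tuned transformer (the model and the helper name `enable_mc_dropout` are illustrative, not part of any library). The idea is to keep the model in eval mode overall but switch only the `nn.Dropout` submodules back to train mode, so each forward pass samples a different mask:

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    # Keep the model in eval mode (so e.g. BatchNorm stays frozen),
    # but flip only the Dropout layers back to train mode so they
    # keep sampling masks at prediction time.
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

# Toy stand-in for the real model (hypothetical architecture).
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(16, 3),
)
x = torch.randn(4, 8)

enable_mc_dropout(model)
with torch.no_grad():
    # With Dropout active, repeated runs now differ.
    runs = [model(x) for _ in range(5)]
```

The open question is how to get this behavior through `Trainer`, since (as far as I can tell) its prediction loop puts the model back into eval mode internally, which would undo the flags set above.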