Monte Carlo Dropout for NLP Trainer

Hi, I am fine-tuning roberta-base for a text-classification task on a large dataset, roughly following Chapter 2 of the book ‘Natural Language Processing with Transformers’ as a guide (https://transformersbook.com/).
Is there a way to implement Monte Carlo dropout in this model to obtain uncertainty estimates? Is there a simple way to do this with the Trainer, or would it be easier to take a different approach to training the network?
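For context, my rough understanding is that MC dropout means keeping the dropout layers active at inference time and averaging several stochastic forward passes. Something like this toy sketch is what I have in mind (`ToyClassifier` is just a stand-in for the fine-tuned roberta-base model, and `enable_mc_dropout` / `mc_predict` are names I made up):

```python
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Stand-in for the real model; only the Dropout layer matters here."""
    def __init__(self, dim=16, n_classes=3, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def enable_mc_dropout(model):
    # Put the model in eval mode (freezes batch norm etc.), but switch
    # the Dropout modules back to train mode so they stay stochastic.
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()

@torch.no_grad()
def mc_predict(model, x, n_samples=20):
    enable_mc_dropout(model)
    # Stack n_samples stochastic forward passes: (n_samples, batch, classes)
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    # Predictive mean, and per-class std as a simple uncertainty signal
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(4, 16)
model = ToyClassifier()
mean, std = mc_predict(model, x, n_samples=50)
```

Is this the right idea, and is there a clean way to hook it into the Trainer's prediction loop?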
Thanks