How to add a new input layer to BERT / RoBERTa?

Hello, I learned that we can modify the `BertEmbeddings` class to add an extra 768-dimensional input on top of the original three input embeddings (word, position, and token type).

I am wondering: is there a convenient way to modify the original class, for example through an API, or should I just edit the original .py file? Is there any difference when using RoBERTa?
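For context, here is a minimal sketch of the kind of modification I have in mind: subclassing `BertEmbeddings` from `transformers` and summing an extra embedding into its output. The `extra_ids` input and its vocabulary size of 100 are placeholders I made up, not anything from the library:

```python
import torch
import torch.nn as nn
from transformers import BertConfig
from transformers.models.bert.modeling_bert import BertEmbeddings

class BertEmbeddingsWithExtra(BertEmbeddings):
    """BertEmbeddings plus one additional learned input embedding."""

    def __init__(self, config):
        super().__init__(config)
        # Placeholder: 100 is a made-up vocabulary size for the new input.
        self.extra_embeddings = nn.Embedding(100, config.hidden_size)

    def forward(self, input_ids=None, extra_ids=None, **kwargs):
        # Compute the standard word + position + token-type embeddings.
        embeddings = super().forward(input_ids=input_ids, **kwargs)
        if extra_ids is not None:
            # Note: this adds the extra term *after* the parent class's
            # LayerNorm/dropout; to add it before, one would copy the
            # original forward instead of calling super().
            embeddings = embeddings + self.extra_embeddings(extra_ids)
        return embeddings

# Quick shape check with a small config (no pretrained weights needed).
cfg = BertConfig(vocab_size=50, hidden_size=16,
                 max_position_embeddings=32, type_vocab_size=2,
                 num_attention_heads=2, intermediate_size=32)
emb = BertEmbeddingsWithExtra(cfg)
ids = torch.randint(0, 50, (2, 8))
extra = torch.randint(0, 100, (2, 8))
out = emb(input_ids=ids, extra_ids=extra)
print(out.shape)  # torch.Size([2, 8, 16])
```

Would something like this be the recommended approach, or is there a cleaner hook?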

Thank you for your help.