How to add a new input layer to BERT / RoBERTa?

Hello, I learned that we can modify the class BertEmbeddings (https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L166) to add a 768-dimensional input layer alongside the original three embedding inputs (word, position, and token type embeddings).

I am wondering: is there a convenient way to modify the original class, for example through an API, or should I just edit the original .py file? And is there any difference when I'm using RoBERTa?
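
For concreteness, here is a minimal sketch of what I have in mind — subclassing BertEmbeddings instead of editing the library file. The class ExtendedBertEmbeddings, the extra_embeds argument, and the extra_proj layer are my own placeholder names, not part of the transformers API:

```python
import torch
from torch import nn
from transformers import BertModel
from transformers.models.bert.modeling_bert import BertEmbeddings


class ExtendedBertEmbeddings(BertEmbeddings):
    """BertEmbeddings plus one extra 768-dim input, added to the usual sum."""

    def __init__(self, config):
        super().__init__(config)
        # Hypothetical projection for the extra features; assumes they
        # arrive as a float tensor of shape (batch, seq_len, hidden_size).
        self.extra_proj = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(
        self,
        input_ids=None,
        token_type_ids=None,
        position_ids=None,
        inputs_embeds=None,
        past_key_values_length=0,
        extra_embeds=None,
    ):
        # Let the parent compute word + position + token-type embeddings
        # (including its LayerNorm and dropout), then add the extra term.
        embeddings = super().forward(
            input_ids=input_ids,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            past_key_values_length=past_key_values_length,
        )
        if extra_embeds is not None:
            embeddings = embeddings + self.extra_proj(extra_embeds)
        return embeddings


model = BertModel.from_pretrained("bert-base-uncased")

# Swap in the extended embeddings, keeping the pretrained weights for the
# original layers; the new projection stays randomly initialized
# (strict=False tolerates the extra parameter).
new_embeddings = ExtendedBertEmbeddings(model.config)
new_embeddings.load_state_dict(model.embeddings.state_dict(), strict=False)
model.embeddings = new_embeddings

# Calling the embeddings module directly works with the extra input:
ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])
extra = torch.randn(1, ids.size(1), model.config.hidden_size)
out = model.embeddings(input_ids=ids, extra_embeds=extra)
print(out.shape)  # torch.Size([1, 6, 768])
```

Getting extra_embeds through a full BertModel.forward call would still need a wrapper around the model, which is part of what I'm asking about. I assume the corresponding class for RoBERTa is RobertaEmbeddings in modeling_roberta.py.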

Thank you for your help.