How to use a custom positional embedding while fine-tuning BERT

Hi, I want to fine-tune BERT for question answering, but instead of using the built-in position embeddings (which are learned vectors looked up by the token's position index), I want to supply my own positional embedding, for example the sinusoidal positional embedding from the paper "Attention Is All You Need". I understand one way to do this would be to modify the input embeddings in the BertModel class, but I am not sure how to modify that class. If there are better ways, I would be happy to follow them. Any suggestions would be highly appreciated.
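In case a concrete sketch helps: assuming the HuggingFace `transformers` library (since `BertModel` is mentioned), one approach that avoids subclassing is to overwrite the weights of `model.embeddings.position_embeddings` (an `nn.Embedding`) with a fixed sinusoidal table and freeze them before fine-tuning. The helper name `sinusoidal_table` is my own; a tiny randomly initialized config is used here only for illustration, and in practice you would load a pretrained checkpoint instead.

```python
import math

import torch
from transformers import BertConfig, BertModel

def sinusoidal_table(num_positions: int, dim: int) -> torch.Tensor:
    """Fixed sin/cos table from "Attention Is All You Need"."""
    position = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)
    div_term = torch.exp(
        torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim)
    )
    table = torch.zeros(num_positions, dim)
    table[:, 0::2] = torch.sin(position * div_term)  # even dims: sine
    table[:, 1::2] = torch.cos(position * div_term)  # odd dims: cosine
    return table

# Small config for illustration; for real fine-tuning use e.g.
# BertModel.from_pretrained("bert-base-uncased") instead.
config = BertConfig(hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=4, intermediate_size=128)
model = BertModel(config)

pos_emb = model.embeddings.position_embeddings  # nn.Embedding(max_positions, hidden)
with torch.no_grad():
    pos_emb.weight.copy_(
        sinusoidal_table(pos_emb.num_embeddings, pos_emb.embedding_dim)
    )
pos_emb.weight.requires_grad = False  # keep the table fixed during fine-tuning
```

The same trick works on `BertForQuestionAnswering`, since it exposes the base model under `model.bert.embeddings`. One caveat: a pretrained checkpoint was trained with its learned position embeddings, so swapping in sinusoidal ones changes the inputs the encoder layers were trained on, and you may need some fine-tuning steps before quality recovers.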