How to freeze the attention map in BERT

I would like to use a customized attention map in a pretrained BERT model. Is there a good way to do it?

For example, suppose I have a predefined attention map with a high attention value (say 0.5) between the tokens at positions 1 and 5. I would then use that attention map during testing.
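Concretely, this is the kind of map I mean. This is just a sketch: the `seq_len` and the uniform background values are made up for illustration; the rows are rescaled so each sums to 1, the way a softmax output would:

```python
import torch

seq_len = 16  # hypothetical fixed sequence length, just for illustration

# Start from uniform attention, then pin the value between positions 1 and 5
# to 0.5 and spread the remaining 0.5 evenly over the rest of those rows,
# so every row still sums to 1.
custom_attn = torch.full((seq_len, seq_len), 1.0 / seq_len)
custom_attn[1, :] = 0.5 / (seq_len - 1)
custom_attn[1, 5] = 0.5
custom_attn[5, :] = 0.5 / (seq_len - 1)
custom_attn[5, 1] = 0.5
```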

At test time, no matter what input sentence I feed to BERT, the attention value between the tokens at positions 1 and 5 will always be 0.5, since the predefined attention map is set up that way.

The whole process does not require training; the pretrained BERT model is already available.
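If it helps, here is the rough direction I was imagining, assuming the HuggingFace transformers implementation of BERT. Attribute names like `transpose_for_scores`, `num_attention_heads`, and `all_head_size` come from its `BertSelfAttention` class and may differ across library versions; `make_fixed_attention_forward` is my own hypothetical helper, and `custom_attn` is the tensor built in the snippet above:

```python
import torch
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def make_fixed_attention_forward(self_attn, probs):
    # Returns a replacement forward() that ignores the learned query/key
    # scores and mixes the value vectors with the fixed `probs` matrix.
    # The pretrained value and output projections are left untouched.
    def forward(hidden_states, *args, **kwargs):
        # (batch, heads, seq, head_size), via the pretrained value projection
        value = self_attn.transpose_for_scores(self_attn.value(hidden_states))
        attn = probs.to(hidden_states.device)
        # Broadcast the same (seq, seq) map over every batch item and head
        attn = attn.expand(hidden_states.size(0),
                           self_attn.num_attention_heads, -1, -1)
        context = torch.matmul(attn, value)
        context = context.permute(0, 2, 1, 3).contiguous()
        context = context.view(context.size(0), context.size(1),
                               self_attn.all_head_size)
        return (context,)
    return forward

# Patch every encoder layer's self-attention to use the fixed map,
# reusing the custom_attn tensor from the snippet above.
for layer in model.encoder.layer:
    sa = layer.attention.self
    sa.forward = make_fixed_attention_forward(sa, custom_attn)

# Inference: pad/truncate to seq_len so the map's shape matches the input.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("any input sentence", return_tensors="pt",
                   padding="max_length", max_length=seq_len, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
```

The idea is to bypass the query/key dot products entirely and reuse only the pretrained value and output projections, so no training is involved. But I am not sure this is the right or cleanest way to do it.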

Does anyone know how to do it?

Thank you so much.