Fine-tuning BERT NSP with specific examples

Hi, I would like to fine-tune an off-the-shelf BERT model that was originally pre-trained with the NSP objective (among others). Normally, NSP training builds positive pairs from consecutive sentences of a continuous document and negative pairs from randomly sampled sentences. My goal is to use the model for the NSP task, but I would like to fine-tune it on specific examples like these:

(Sentence A, Sentence B, False)
(Sentence A, Sentence C, True)
(Sentence D, Sentence E, False)

and so on.
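
For concreteness, here is roughly how I am preparing these examples, assuming the Hugging Face `transformers` tokenizer and `bert-base-uncased` (the sentences and the True/False labels are just placeholders from my data):

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# (sentence_a, sentence_b, is_next) triples; the sentences are placeholders
examples = [
    ("Sentence A", "Sentence B", False),
    ("Sentence A", "Sentence C", True),
    ("Sentence D", "Sentence E", False),
]

# Encode each pair as a single [CLS] A [SEP] B [SEP] sequence,
# the input format BERT expects for sentence-pair tasks
encodings = tokenizer(
    [a for a, _, _ in examples],
    [b for _, b, _ in examples],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

# In my convention, True means sentence B actually follows sentence A
labels = torch.tensor([int(is_next) for _, _, is_next in examples])
```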

I’m currently implementing this by adding a classification layer with multiple inputs on top of the BERT layers (a simplified sketch is below), and while this may work, (1) I’m sure there must be a simpler way to fulfill this seemingly obvious use case, and (2) I’m not actually sure that adding new, randomly initialized layers will give the best results!
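
Here is a simplified sketch of the kind of thing I mean (my real setup feeds in multiple inputs, but this captures the idea; `PairClassifier` and the model name are just illustrative, and the head is my own addition rather than anything built into the library):

```python
import torch.nn as nn
from transformers import BertModel

class PairClassifier(nn.Module):
    """Sentence-pair classifier: pre-trained BERT plus a fresh binary head."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        # Freshly initialized layer -- this is the part I'm unsure about,
        # since it ignores the NSP head the model was pre-trained with
        self.classifier = nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask, token_type_ids):
        outputs = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        # pooler_output is the transformed [CLS] representation,
        # the same vector the original NSP head classifies
        return self.classifier(outputs.pooler_output)
```

I then train this with a standard cross-entropy loss against the True/False labels above.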

Does anyone know of a simple way to fine-tune NSP with specific yes/no samples?