Sentence embeddings from a BERT cross-encoder

Hello,

Does it make sense to do a forward pass of a sentence pair through BERT, so that all tokens can interact with each other through attention, and subsequently extract the vectors corresponding to each sentence?
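Concretely, what I have in mind is something like the sketch below: after the cross-encoder pass, split the final hidden states by segment (using `token_type_ids`, where 0 marks the first sentence and 1 the second) and mean-pool each segment into one vector. The function name, the mean-pooling choice, and the toy arrays are just illustrations; in practice the hidden states would come from something like `model(**tokenizer(a, b, return_tensors="pt")).last_hidden_state`.

```python
import numpy as np

def segment_embeddings(hidden_states, token_type_ids, attention_mask):
    """Mean-pool the token vectors of each sentence after a BERT
    cross-encoder forward pass, using token_type_ids to split the
    sequence (0 = first sentence, 1 = second sentence)."""
    embeddings = []
    for segment in (0, 1):
        # Keep only real (non-padding) tokens belonging to this segment.
        mask = (token_type_ids == segment) & (attention_mask == 1)
        embeddings.append(hidden_states[mask].mean(axis=0))
    return embeddings

# Toy stand-in for model output: 6 tokens, hidden size 4.
hidden = np.arange(24, dtype=float).reshape(6, 4)
types = np.array([0, 0, 0, 1, 1, 1])  # first 3 tokens -> sentence A, last 3 -> sentence B
attn = np.ones(6, dtype=int)

emb_a, emb_b = segment_embeddings(hidden, types, attn)
# emb_a and emb_b are each one vector of hidden size 4
```

My worry is that, unlike a bi-encoder, each sentence's vectors here are conditioned on the other sentence, so I'm not sure the pooled embeddings are reusable outside this pair.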