AugSBERT with Cross-Encoder in both steps?


In Augmented SBERT, step 2 fine-tunes a bi-encoder on the gold and silver training data. I was wondering whether it is possible to fine-tune a cross-encoder in step 2 instead, since cross-encoders achieve better performance. My goal is to find similar sentences in a small dataset.