(Auto) Sequence Classification model with triplets / contrastive loss

Hi,
I am trying to train a cross-encoder and/or a bi-encoder, fine-tuned on a custom dataset with about 30k entries. This is in a search context: the annotations are query-document pairs, each labeled as relevant (positive) or irrelevant (negative).

In order to train a text classification model on the query-document pairs, I have been following the “Sequence Classification with IMDb Reviews” guide.

I encode my data for a simple text classifier (“relevant” vs. “irrelevant”) like this:

def encode(examples):
    # Tokenize query and document as a sentence pair:
    # [CLS] query [SEP] document [SEP]
    return tokenizer(
        examples['queryTerm'],
        examples['text'],
        truncation=True,
        padding='max_length',
    )
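
For completeness, this is roughly how I apply it and set up the binary classifier (the model name is just a placeholder, and the tiny inline dataset only illustrates my column layout):

from datasets import Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # relevant vs. irrelevant
)

# Illustrative stand-in for my 30k-entry dataset.
dataset = Dataset.from_dict({
    "queryTerm": ["example query"],
    "text": ["example document"],
    "label": [1],  # 1 = relevant, 0 = irrelevant
})
encoded = dataset.map(encode, batched=True)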

I want to proceed by training a cross-encoder with a contrastive loss on triplets (an anchor together with examples annotated with different classes), as discussed in the Sentence-BERT paper, among others.
I am wondering about the internals of the Auto model for sequence classification. Does it make sense to adapt my encode() function so that it calls the tokenizer roughly like this:

tokenizer(queryTerm, example1["text"], example2["text"])
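
From reading the tokenizer docs, I suspect it only pairs two sequences per call, so perhaps the triplet has to be split into two (query, document) pairs instead, something like:

# My assumption: since the tokenizer pairs at most two sequences,
# a (query, positive, negative) triplet is encoded as two pairs.
pos = tokenizer(queryTerm, example1["text"], truncation=True, padding="max_length")
neg = tokenizer(queryTerm, example2["text"], truncation=True, padding="max_length")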

Furthermore, can I use an Auto model to train a bi-encoder, again on triplets? What is the recommended approach for this use case?
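
For the bi-encoder case, here is a rough sketch of what I have in mind (mean pooling over AutoModel outputs plus PyTorch's TripletMarginLoss; the model name and pooling choice are just my assumptions, not a recommendation I found anywhere):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
loss_fn = torch.nn.TripletMarginLoss(margin=1.0)

def embed(texts):
    # Mean-pool the last hidden state over non-padding tokens.
    batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    out = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return (out * mask).sum(1) / mask.sum(1)          # (B, H)

# One triplet step: query as anchor, a relevant document as positive,
# an irrelevant one as negative.
anchor = embed(["example query"])
positive = embed(["a relevant document"])
negative = embed(["an irrelevant document"])
loss = loss_fn(anchor, positive, negative)
loss.backward()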

Hi @carschno. Did you figure out any way to proceed with this?