Is it possible to use multiple transformer models in a Pipeline?

In the HF Pipeline, I see there is only one model and one tokenizer per pipeline.
But is it possible to use multiple transformer models in one Pipeline?

What I am trying to build is a bi-encoder consisting of two BERT-based encoders (say, E1 and E2).
The inputs are natural-language sentence pairs (s1, s2).
I will encode s1 with E1 and s2 with E2, and then compare the similarity of the two encoded vectors.
E1 and E2 will share the same architecture (e.g., both RoBERTa).
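To make it concrete, here is a rough sketch of the architecture I have in mind as a plain `torch.nn.Module` (mean pooling is just one pooling choice I'm assuming, and the commented-out checkpoint names are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiEncoder(nn.Module):
    """Wraps two encoders and scores a sentence pair by cosine similarity."""

    def __init__(self, encoder1: nn.Module, encoder2: nn.Module):
        super().__init__()
        self.encoder1 = encoder1  # E1, encodes s1
        self.encoder2 = encoder2  # E2, encodes s2

    @staticmethod
    def mean_pool(last_hidden_state, attention_mask):
        # Average token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        return (last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, inputs1, inputs2):
        # Each encoder is expected to return an object with .last_hidden_state,
        # as HF AutoModel outputs do.
        h1 = self.encoder1(**inputs1).last_hidden_state
        h2 = self.encoder2(**inputs2).last_hidden_state
        v1 = self.mean_pool(h1, inputs1["attention_mask"])
        v2 = self.mean_pool(h2, inputs2["attention_mask"])
        return F.cosine_similarity(v1, v2)

# Plugging in two RoBERTa encoders would look like this
# (checkpoint name is a placeholder):
# from transformers import AutoModel, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("roberta-base")
# model = BiEncoder(AutoModel.from_pretrained("roberta-base"),
#                   AutoModel.from_pretrained("roberta-base"))
```

So the question is really whether two such encoders can live inside one Pipeline object, or whether this wrapper module is the way to go.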

Is it possible to exploit the HF Pipeline for this, or do I need to build it directly from torch.nn.Module?

Any suggestions are welcome :slight_smile: