How can I encode several separate sentences with [SEP] tokens for classification using a pre-trained BERT model, given that at most two [SEP] tokens are allowed? Specifically, how do I format the input as [CLS]sent1[SEP]sent2[SEP]sent3[SEP]sent4[SEP]?
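For reference, this is roughly what I mean, sketched with the HuggingFace transformers BertTokenizer (the sentence strings are just placeholders, and this is only one possible way to build that layout):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Placeholder sentences standing in for my four unrelated inputs.
sentences = ["sent1", "sent2", "sent3", "sent4"]

# Join the sentences with the tokenizer's [SEP] token; the tokenizer then
# adds the leading [CLS] and the trailing [SEP] itself, which yields
# [CLS] sent1 [SEP] sent2 [SEP] sent3 [SEP] sent4 [SEP].
joined = f" {tokenizer.sep_token} ".join(sentences)
encoded = tokenizer(joined)

print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```

Note that encoded this way, all token_type_ids come out as 0, since the tokenizer only assigns segment id 1 when a second text is passed as a pair.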
Are the sentences part of the same input (e.g., different sentences of a paragraph) or unrelated?
They are unrelated and not from the same paragraph.