Sentence embeddings generated by RoBERTa

Hi, I am trying to understand how sentence embeddings are generated in RoBERTa. What I've understood so far is that in BERT, sentence embeddings are created by training the [CLS] token using the Next Sentence Prediction (NSP) task. However, RoBERTa drops the NSP objective, yet we still have sentence embeddings. I haven't been able to find a resource that explains how we can still get sentence embeddings when that objective is dropped. I am new to the subject, so there may be misunderstandings on my part; any clarification would be helpful and appreciated :slight_smile:
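
To make the question concrete, here is roughly what I mean by "sentence embedding" (a minimal sketch assuming the Hugging Face transformers API; `roberta-base` is just an example checkpoint, and the two pooling options are my own guesses at what people do):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Option 1: take the hidden state of the first token (<s>, RoBERTa's
# counterpart to BERT's [CLS]) as the sentence embedding.
cls_embedding = outputs.last_hidden_state[:, 0, :]

# Option 2: mean-pool all token hidden states, masking out padding.
mask = inputs["attention_mask"].unsqueeze(-1)
mean_embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
```

My confusion is whether something like Option 1 is even meaningful for RoBERTa, given that the `<s>` token was never trained with an NSP-style sentence-level objective.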