Why do we need to add special tokens for tasks other than classification?

Hello,

While tokenizing, is there any reason to add the special tokens (<s>, </s>) if the task is NOT text/sentence classification?

I am using a RoBERTa model to get token embeddings, which I will later use for my specific task.
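
To make it concrete, here is roughly what I am doing (a minimal sketch, using roberta-base as a stand-in for my actual checkpoint):

```python
from transformers import RobertaTokenizer, RobertaModel
import torch

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

text = "Some example text."
# add_special_tokens=True wraps the sequence in <s> ... </s>;
# this is the flag I am unsure about for my use case.
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=True)

with torch.no_grad():
    outputs = model(**inputs)

# One embedding per token, including the <s> and </s> positions.
token_embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
```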

More generally: I have texts longer than 512 tokens and I need the embedding for each token, so I am splitting each text into segments of up to 512 tokens. Is there any reason to add the model's special tokens to each segment?
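
This is roughly how I am chunking, assuming the two special tokens count toward the 512 limit (so each segment carries at most 510 content tokens). My question is whether the wrapping step at the end is needed at all:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
long_text = "some long document " * 1000  # placeholder for my actual text

# Tokenize once without special tokens, then chunk the raw ids.
token_ids = tokenizer(long_text, add_special_tokens=False)["input_ids"]

max_content = 510  # 512 minus room for <s> and </s>
segments = [
    # build_inputs_with_special_tokens wraps a chunk as <s> ... </s>.
    tokenizer.build_inputs_with_special_tokens(token_ids[i:i + max_content])
    for i in range(0, len(token_ids), max_content)
]
```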

Thanks.