Two SEP Tokens added by microsoft/codebert-base

Hey guys,
I have a quick question about the microsoft/codebert-base tokenizer: why are two </s> tokens added between the two input strings? You can see the separator id 2 appear back-to-back in the input_ids printed below.
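
For context, the same thing happens with a minimal pair, independent of my dataset (the two strings below are made up just to reproduce the behavior):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

# Two made-up strings, just to show the tokenizer's pair formatting
enc = tokenizer("a = 1", "b = 2")
print(enc["input_ids"])
# the separator id 2 (</s>) shows up twice between the two strings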

The code I used:

from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

checkpoint = "microsoft/codebert-base"

model = AutoModel.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_dataset = load_dataset('json', data_files='/home/<user>/Data/<DataDir>/dataset_v1.jsonl', split='train')

def toke(example):
    # Tokenize the two code snippets of each example as a sentence pair
    return tokenizer(example["sentence1"], example["sentence2"])

tokenized_dataset = raw_dataset.select(range(10000)).map(toke, batched=True)

print(tokenized_dataset[7]['sentence1'])
print(tokenized_dataset[7]['sentence2'])
print(tokenized_dataset[7]['input_ids'])

Output (sentence1, sentence2, input_ids of example 7):

train_nan_df.head()
test_df['ImageId'] = np.array(os.listdir ('…/input/test_images/'))
[0, 21714, 1215, 10197, 1215, 36807, 4, 3628, 43048, 2, 2, 21959, 1215, 36807, 48759, 8532, 28081, 44403, 5457, 46446, 4, 30766, 1640, 366, 4, 8458, 41292, 31509, 49445, 46797, 73, 21959, 1215, 39472, 73, 108, 35122, 2]
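
For reference, converting the ids back to tokens makes the double separator explicit (just calling convert_ids_to_tokens on the same tokenizer as above):

# Map the ids back to token strings to make the special tokens visible
tokens = tokenizer.convert_ids_to_tokens(tokenized_dataset[7]['input_ids'])
print(tokens)
# prints a list of the form ['<s>', ..., '</s>', '</s>', ..., '</s>'],
# i.e. <s> sentence1 </s></s> sentence2 </s>, with two consecutive
# sep tokens (tokenizer.sep_token_id == 2) at the pair boundary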

Thanks in advance!