Extra EOS and SOS tokens in model-generated output

When I run inference manually on a model (in this case kabita-choudhary/finetuned-bart-for-conversation-summary), the generated outputs all start with an EOS token (</s>) followed by a run of SOS tokens (<s>). Am I calling the model incorrectly?

>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("kabita-choudhary/finetuned-bart-for-conversation-summary")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("kabita-choudhary/finetuned-bart-for-conversation-summary")

>>> input_text = 'Mick: Are you going to be late?\nJohn: Yes, I think so'
>>> tokenized_text = tokenizer(input_text)['input_ids']
>>> input_tokens = torch.tensor([tokenized_text])

>>> output_tokens = model.generate(input_tokens)
>>> tokenizer.batch_decode(output_tokens)

['</s><s><s><s>John is going to be late.</s>']
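
For what it's worth, passing skip_special_tokens=True to batch_decode should strip these markers, which I'd expect to give something like the output below, but that only hides the tokens rather than explaining why generate emits them in the first place.

>>> tokenizer.batch_decode(output_tokens, skip_special_tokens=True)
['John is going to be late.']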