BART - Input two sentences?

Hi! I would like to understand a bit better how BART handles multiple sentences.
I got that I can encode two sentences with tokenizer(sent_a, sent_b).
My first sentence contains a <mask> symbol that is to be filled. However, I noticed that the second sentence, unlike the first, isn’t part of the output (which is okay in my case, but I wonder why). In addition, the second sentence doesn’t seem to have much influence on how the <mask> token is replaced, so it isn’t really treated as context, and it even seems to confuse the model. Can I actually input two sentences if I’m aiming for the mask-filling task? Would it make sense to finetune for it?
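For reference, here is roughly how I inspect the joint encoding (facebook/bart-large and the two example sentences are just stand-ins for my actual setup); as far as I can tell, the pair is simply concatenated with BART's RoBERTa-style separator tokens:

from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
sent_a = "The weather today is really <mask>."           # sentence containing the mask
sent_b = "Everyone stayed inside because of the storm."  # intended context sentence

# Encoding the pair joins the sentences as: <s> sent_a </s> </s> sent_b </s>
encoded = tokenizer(sent_a, sent_b)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))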

input_ids = tokenizer.encode(sent_a, sent_b, return_tensors="pt").to('cuda:0')
tokenizer.batch_decode(model.generate(input_ids))
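In case it helps, here is a self-contained version of what I'm running (again, the model name and the two sentences are placeholders for my actual setup):

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").to("cuda:0")

sent_a = "The weather today is really <mask>."           # contains the mask to fill
sent_b = "Everyone stayed inside because of the storm."  # meant to serve as context

# Encode both sentences as a pair and let the model fill in the mask.
input_ids = tokenizer.encode(sent_a, sent_b, return_tensors="pt").to("cuda:0")
generated_ids = model.generate(input_ids)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))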