What is the difference between the T5 and BART models?

I understand that both are encoder-decoder seq2seq models with slightly different pretraining objectives. (Also, T5 can be trained on multiple tasks at the same time via its text-to-text task prefixes, while I'm not sure whether the same holds for BART.)
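For concreteness, here is a minimal sketch of how the two pretraining objectives frame their inputs and targets, using plain strings (no model or library needed). The token names follow the respective papers: T5's span corruption uses numbered sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...) and reconstructs only the dropped spans, while BART's text infilling replaces each span with a single `<mask>` token and reconstructs the whole sentence. The example sentence itself is arbitrary.

```python
original = "The quick brown fox jumps over the lazy dog"

# T5 span corruption: each masked span becomes a numbered sentinel,
# and the target contains only the dropped spans, in order.
t5_input = "The <extra_id_0> fox jumps <extra_id_1> the lazy dog"
t5_target = "<extra_id_0> quick brown <extra_id_1> over"

# BART text infilling: each span is replaced by one <mask> token,
# and the decoder reconstructs the entire original sentence.
bart_input = "The <mask> fox jumps <mask> the lazy dog"
bart_target = original

# T5's text-to-text framing also prefixes every downstream task,
# which is what allows training on multiple tasks at once:
t5_summarize = "summarize: " + original
t5_translate = "translate English to German: " + original
```

This also shows the structural difference in the targets: T5's decoder only ever emits the missing pieces, whereas BART's decoder always emits full text, which is one reason BART fine-tunes naturally on generation tasks like summarization.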

So essentially, are they just very similar models? What exactly are the differences?