Can I fine-tune a BART-based model for a text auto-encoder task?
I want to pass a sentence to the encoder, learn a representation of it, and then use that representation to generate the same sentence with the decoder.
BART was pre-trained on a similar (denoising auto-encoding) objective, but how can I fine-tune the model for this reconstruction task?
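
For context, here is roughly what I have in mind: a minimal sketch (not tested end-to-end) using Hugging Face Transformers, where the labels are simply the input ids so the decoder is trained to reconstruct the input sentence. The model name, learning rate, and toy corpus are just placeholders. Is this the right way to set up the fine-tuning?

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

sentences = ["The quick brown fox jumps over the lazy dog."]  # my own corpus would go here

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()

for sentence in sentences:
    batch = tokenizer(sentence, return_tensors="pt", truncation=True, max_length=128)
    # Use the input ids themselves as labels -> reconstruction objective
    labels = batch["input_ids"].clone()
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        labels=labels,  # the model shifts these internally to build decoder inputs
    )
    loss = outputs.loss  # cross-entropy between decoder output and the original sentence
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After training, I assume I could grab the sentence representation from the encoder like this:
# enc_out = model.get_encoder()(input_ids=batch["input_ids"]).last_hidden_state
```

Does simply setting `labels` equal to the input ids like this give me the auto-encoder behaviour I want, or is there a more appropriate way to fine-tune BART for it?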