Hello,
My account name is yusukemori.
(I’m the one who asked about the Seq2seq Trainer at https://github.com/huggingface/transformers/issues/7740 .
Thank you for the quick, kind, and detailed reply back then!)
This is my first time posting on this forum, so I apologize in advance if I am being rude.
I’m now trying to figure out how to use my own custom “[mask]” token for pre-training/fine-tuning a Seq2seq model.
Specifically, I’m wondering how to randomly apply [mask] at training time (so that the masking changes every epoch), and how to get the Seq2seq Trainer to load the masked data.
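To make my question concrete, here is a rough sketch of what I have in mind. This is only my own guess, not an existing API: the class RandomMaskCollator, the mask_prob value, and the “source”/“target” field names are all my own assumptions. Because a collator runs on every batch, the random mask would be re-drawn each epoch, which is the behavior I want:

```python
import torch
from transformers import BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")

class RandomMaskCollator:
    """My own sketch (not an existing API): masks input tokens on the
    fly, so the masking is re-drawn for every batch and therefore
    differs from epoch to epoch."""

    def __init__(self, tokenizer, mask_prob=0.15):  # mask_prob is just my guess
        self.tokenizer = tokenizer
        self.mask_prob = mask_prob

    def __call__(self, examples):
        # examples: list of dicts with raw "source"/"target" text
        # (hypothetical field names for my dataset)
        sources = [ex["source"] for ex in examples]
        targets = [ex["target"] for ex in examples]
        batch = self.tokenizer(sources, padding=True, return_tensors="pt")
        labels = self.tokenizer(targets, padding=True, return_tensors="pt").input_ids
        labels[labels == self.tokenizer.pad_token_id] = -100  # ignore padding in the loss

        # Draw a fresh random mask on every call, skipping special tokens.
        probs = torch.full(batch["input_ids"].shape, self.mask_prob)
        special = torch.tensor(
            [self.tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
             for ids in batch["input_ids"].tolist()],
            dtype=torch.bool,
        )
        mask = torch.bernoulli(probs).bool() & ~special
        batch["input_ids"][mask] = self.tokenizer.mask_token_id
        batch["labels"] = labels
        return batch
```

I would then hope to pass this to the trainer with something like Seq2SeqTrainer(model=model, args=training_args, train_dataset=train_dataset, data_collator=RandomMaskCollator(tokenizer)), since the base Trainer accepts a data_collator argument, but I am not sure whether this is the intended approach.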
As far as BERT is concerned, there seems to be a BertProcessing processor in huggingface/tokenizers, but if I understand correctly, it is not intended to be used with the Seq2seq Trainer.
Is there any processor, such as a “BARTProcessor”, for the Seq2seq Trainer?
Please allow me to ask one more question: is there code for pre-training BART from scratch with the Seq2seq Trainer? (For fine-tuning, the existing example is clear; thank you for it!)
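In case it helps clarify what I mean by “from scratch”: I understand that I can create a randomly initialized model from a config rather than from_pretrained, roughly as below, but I do not know how the masking/noising and the Seq2seq Trainer should be combined into a full pre-training script:

```python
from transformers import BartConfig, BartForConditionalGeneration

# Randomly initialized BART (no pretrained weights); the vocab_size
# would come from my own tokenizer and is just a placeholder here.
config = BartConfig(vocab_size=50265)
model = BartForConditionalGeneration(config)
```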
I’m afraid these are beginner’s questions, and I apologize if they are rude.
Thank you for your help.
Sincerely,
yusukemori