Fine-tuning a BERT model on domain-specific language

Hi everyone

I want to further fine-tune a BERT model on domain-specific language, as done in https://arxiv.org/pdf/1903.10676.pdf or https://arxiv.org/abs/1908.10063. If I understood correctly, I have to use the same vocabulary as the original pre-trained model, or else train the model from scratch. Since I don't want to train from scratch, I accept that I have to keep the original vocab. My first fine-tuning step is to adapt the model to the domain-specific language: I feed it unlabeled domain-specific text (a large dataset) so it gets familiar with the language, freezing some layers during training to prevent forgetting of the pre-trained corpus. Secondly, I want to fine-tune it for sentiment classification, giving the model labeled data (a smaller dataset) to train on.
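Roughly, I imagine something like the following two-stage setup with the Hugging Face transformers and datasets libraries. This is just a sketch: the file names domain_corpus.txt and sentiment.csv, the choice to freeze the first 8 encoder layers, and all hyperparameters are placeholders I made up.

```python
# Stage 1: domain-adaptive pre-training with masked language modeling on unlabeled domain text.
from datasets import load_dataset
from transformers import (
    BertForMaskedLM,
    BertForSequenceClassification,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
mlm_model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Freeze the embeddings and the lower encoder layers to limit forgetting of the original corpus
# (freezing the first 8 of 12 layers is an arbitrary choice here).
for param in mlm_model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in mlm_model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

# "domain_corpus.txt": unlabeled domain-specific text, one passage per line (placeholder file name).
domain_data = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
domain_data = domain_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="bert-domain-adapted", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=domain_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("bert-domain-adapted")
tokenizer.save_pretrained("bert-domain-adapted")

# Stage 2: fine-tune the domain-adapted checkpoint for sentiment classification on labeled data.
clf_model = BertForSequenceClassification.from_pretrained("bert-domain-adapted", num_labels=2)

# "sentiment.csv": labeled data with "text" and "label" columns (placeholder file name).
sentiment_data = load_dataset("csv", data_files={"train": "sentiment.csv"})["train"]
sentiment_data = sentiment_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

clf_trainer = Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="bert-sentiment", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=sentiment_data,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
clf_trainer.train()
```

I'm not sure this is the right way to do the layer freezing or whether the Trainer setup is sensible for the domain-adaptation stage.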

Can anyone help me with how to do that (both steps)? Thank you very much in advance. :innocent:


Hey, does anyone have an idea?