I have trained a BERT model for multi-label text classification with 15+ classes. Sometimes, at inference time, it does not predict as expected for certain sentences / paragraphs.
So, currently I am fine-tuning the entire model on the original training data plus the extra data. Is there any approach where I can train on only the new data instead of everything, so that time can be saved?
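To make the question concrete, here is a minimal sketch of what I mean by "continue from the already fine-tuned weights on new data only". It uses a toy NumPy multi-label classifier as a stand-in for BERT (the real setup would reload the saved checkpoint and run a few more epochs, usually at a lower learning rate); all names and sizes here are illustrative:

```python
import numpy as np

# Toy stand-in for a multi-label classification head (BERT itself omitted).
# Idea: instead of retraining on train + new data, reload the previously
# fine-tuned weights and run a few extra epochs on the NEW data only.

rng = np.random.default_rng(0)
n_feat, n_labels = 8, 15

def train(X, Y, W=None, epochs=200, lr=0.5):
    """Multi-label logistic regression trained with BCE gradient steps."""
    if W is None:
        W = np.zeros((n_feat, n_labels))
    for _ in range(epochs):
        P = 1.0 / (1.0 + np.exp(-X @ W))      # sigmoid, one column per label
        W -= lr * X.T @ (P - Y) / len(X)      # gradient of binary cross-entropy
    return W

# Original training data and the initial "fine-tune".
X_old = rng.normal(size=(100, n_feat))
W_true = rng.normal(size=(n_feat, n_labels))
Y_old = (X_old @ W_true > 0).astype(float)
W = train(X_old, Y_old)                       # would be saved as a checkpoint

# New data arrives: continue from the saved weights, new data only,
# with a smaller learning rate and fewer epochs.
X_new = rng.normal(size=(20, n_feat))
Y_new = (X_new @ W_true > 0).astype(float)
W = train(X_new, Y_new, W=W, epochs=50, lr=0.1)

acc = (((X_new @ W) > 0).astype(float) == Y_new).mean()
```

My worry with this kind of continued fine-tuning is that training on only the new examples may cause the model to drift on the old classes (catastrophic forgetting), so I would like to know if there is a standard way to do it safely.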