I’m rather new to BERT. I have fine-tuned a bert-base-uncased model for a sequence classification task in PyTorch.
The model now gives good classification results, but each class also has subclasses that I want to predict. For instance, class C1 consists of subclasses 1, 2, and 3.
My question: is there a way to fine-tune a model so that it first predicts the major class of a text and then categorizes the subclass within that major class? Is this possible with BERT?
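To make the question concrete, here is a rough sketch of the kind of architecture I have in mind: a shared encoder feeding two classification heads, one per level. The hidden size, class counts, and the random tensor standing in for BERT's pooled [CLS] output are all placeholders, not my actual setup:

```python
import torch
import torch.nn as nn

class HierarchicalHeads(nn.Module):
    """Two classification heads over a shared encoder representation:
    one for the major class, one for the subclass. In practice the
    input would be BERT's pooled [CLS] embedding."""
    def __init__(self, hidden_size=768, num_major=3, num_sub=9):
        super().__init__()
        self.major_head = nn.Linear(hidden_size, num_major)
        self.sub_head = nn.Linear(hidden_size, num_sub)

    def forward(self, pooled):
        # Both heads see the same pooled representation
        return self.major_head(pooled), self.sub_head(pooled)

# Dummy batch of 4 pooled embeddings standing in for BERT output
pooled = torch.randn(4, 768)
model = HierarchicalHeads()
major_logits, sub_logits = model(pooled)
print(major_logits.shape, sub_logits.shape)
# torch.Size([4, 3]) torch.Size([4, 9])
```

I'm unsure whether two heads trained jointly like this, or separate per-major-class subclass models, is the better approach, hence the question.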