Initialising a BERT model with custom NER entity labels

My task is to perform NER on a dataset I have. I believe the default entity labels are B-ORG, I-ORG, B-PER, and so on (I am not sure whether these are actually the defaults). I am using AutoModelForTokenClassification with the "bert-base-multilingual-uncased" model.
I want the entity labels to be B-COLLEGE, I-COLLEGE, B-NAME, I-NAME, and so on. In total I have 8 entity labels, not counting "O".
How can I train the model to detect these entities?
Will the model learn these entities automatically when I train it on my dataset, or do I have to initialise the model with these entities first?
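For reference, here is a minimal sketch of how I imagine the label setup would look. The entity names (COLLEGE, NAME, DEGREE, SKILL) are placeholders for my actual types, and I am not sure whether passing the mappings to from_pretrained like this is the right way to wire custom labels into the model:

```python
# Build BIO-style labels from my entity types (names here are hypothetical;
# 4 types give the 8 B-/I- labels I mentioned, plus "O").
entity_types = ["COLLEGE", "NAME", "DEGREE", "SKILL"]
labels = ["O"] + [f"{prefix}-{ent}" for ent in entity_types for prefix in ("B", "I")]

# Integer <-> string label mappings the model config expects.
id2label = {i: lab for i, lab in enumerate(labels)}
label2id = {lab: i for i, lab in id2label.items()}

# Model initialisation (requires `transformers` and a network/model download,
# so it is left commented out here):
# from transformers import AutoModelForTokenClassification
# model = AutoModelForTokenClassification.from_pretrained(
#     "bert-base-multilingual-uncased",
#     num_labels=len(labels),
#     id2label=id2label,
#     label2id=label2id,
# )

print(len(labels))  # "O" plus one B- and one I- tag per entity type
```

Is this the intended approach, or does the label set get inferred from the training dataset instead?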