Greetings,
I have followed the blog post “How to train a new language model from scratch using Transformers and Tokenizers” to train a new model.
The result of that training is a RobertaForMaskedLM model, which I then uploaded to the Hugging Face Hub as Andrija/SRoBERTa.
(For now this is just an experimental run with a smaller dataset and 1 epoch.)
However, I am interested in how to train the model so that when I load it and choose “NER” (aka “token-classification”; in this instance it would be a RobertaForTokenClassification) as a pipeline, it won’t give a warning that the head is not trained (which I know it is not).
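For reference, this is roughly what I mean; a minimal sketch using my checkpoint and the standard `pipeline` API:

```python
from transformers import pipeline

# Loading an MLM-only checkpoint into a token-classification pipeline:
# the classification head has no trained weights, so transformers warns
# that some weights were newly initialized and should be trained.
ner = pipeline("token-classification", model="Andrija/SRoBERTa")
```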
How would I accomplish such a task? I feel I am asking a stupid question, but if you could help I would appreciate it.
Should I just load the model and “finetune” it, just to train the head?
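Something along these lines is what I imagine; a minimal sketch, assuming a standard token-classification dataset (conll2003 here only as a placeholder, I would use a Serbian NER dataset instead):

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: any dataset with "tokens" and "ner_tags" columns works here.
dataset = load_dataset("conll2003")
label_list = dataset["train"].features["ner_tags"].feature.names

# RoBERTa tokenizers need add_prefix_space=True for pre-tokenized input.
tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa", add_prefix_space=True)

# The body is loaded from my MLM checkpoint; the token-classification
# head is newly initialized here and gets trained below.
model = AutoModelForTokenClassification.from_pretrained(
    "Andrija/SRoBERTa", num_labels=len(label_list)
)

def tokenize_and_align(examples):
    tokenized = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True
    )
    labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        # Label only the first sub-token of each word; special tokens
        # and continuation sub-tokens get -100 so the loss ignores them.
        labels.append(
            [tags[w] if w is not None and (j == 0 or word_ids[j - 1] != w)
             else -100
             for j, w in enumerate(word_ids)]
        )
    tokenized["labels"] = labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sroberta-ner", num_train_epochs=1),
    train_dataset=tokenized_dataset["train"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```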
I found a relevant post and tried to follow the same steps from its Colab code.
However, I am getting:
Any hints are welcome!
Best regards,
Andrija