BERT NER model: start and end positions are None after fine-tuning

Hi all,

I have fine-tuned a BERT NER model on my dataset. The model I am fine-tuning is "dslim/bert-base-NER". I was able to train it successfully using the following notebook as a reference:
https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb#scrollTo=zPDla1mmZiax

The predictions from the base model contained the start and end positions of each word in the original text, e.g.:
{'entity_group': 'ORG', 'score': 0.9992545247077942, 'word': 'A', 'start': 10, 'end': 11}
{'entity_group': 'ORG', 'score': 0.998507097363472, 'word': '##bc Corp Ltd', 'start': 11, 'end': 22}

The predictions from the re-trained model, however, look like this:
{'entity_group': 'ORG', 'score': 0.747031033039093, 'word': '##7', 'start': None, 'end': None}
{'entity_group': 'ORG', 'score': 0.9055356582005819, 'word': 'Games , Inc', 'start': None, 'end': None}
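
In both cases I am generating the predictions with a token-classification pipeline, roughly like this (a minimal sketch; the model path and example sentence are placeholders, not my actual data):

```python
from transformers import pipeline

# minimal sketch; "./my-finetuned-ner" and the sentence are placeholders
ner = pipeline(
    "ner",
    model="./my-finetuned-ner",     # or "dslim/bert-base-NER" for the base model
    aggregation_strategy="simple",  # groups word pieces into entity spans
)
print(ner("Abc Corp Ltd announced a new product."))
```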

I am passing the position ids to the model during training. I looked through the training parameters but could not find a way to pass the start and end positions of the words to the training process. I do have the start and end positions of the tokenized words.
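
For reference, this is how I am getting those start and end positions (a minimal sketch, assuming a fast tokenizer, since return_offsets_mapping only works with fast tokenizers; the sentence is made up):

```python
from transformers import AutoTokenizer

# AutoTokenizer loads the fast (Rust-backed) tokenizer by default
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")

enc = tokenizer("Abc Corp Ltd announced a new product.",
                return_offsets_mapping=True)

# each tuple is the (start, end) character span of a token in the input text
print(list(zip(tokenizer.convert_ids_to_tokens(enc["input_ids"]),
               enc["offset_mapping"])))
```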