Multilingual NER: fine-tuning a pretrained model

Hello,
I want to fine-tune a pretrained model for named entity recognition and chose RoBERTa-base. The problem is that the dataset contains some Russian words, which the tokenizer cannot handle properly during tokenization. I thought about adding these words to the tokenizer (it depends on how many words there are), but I am not sure the model would then understand that they are ORG, OTH, etc. What would you recommend as a solution for this case?
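Something like this is what I had in mind for extending the tokenizer; russian_words and num_labels are just placeholders, not my real values:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base",
    num_labels=9,  # placeholder: the size of my label set
)

# russian_words would be the list of Russian tokens collected from the dataset
russian_words = ["Политисполком", "отказал", "отставке"]  # just a few examples

num_added = tokenizer.add_tokens(russian_words)
model.resize_token_embeddings(len(tokenizer))  # grow the embeddings for the new tokens
print(f"added {num_added} tokens")
```

But even if the new tokens get embeddings this way, they start out untrained, which is why I am unsure the model will learn the right entity types for them.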
The dataset is a .tsv file of German tokens with numbers and labels. I load it into a pandas DataFrame with the columns index, token, and label, and then rebuild the sentences from the index boundaries: whenever the index goes back to 1, the previous sentence ends and a new one begins.

This is one of the sentences:

(NEWSru.ua / : Политисполком СПУ отказал Морозу в отставке Die SPU legte wie auch vier weitere Parteien beim Obersten Gericht der Ukraine Beschwerde gegen den Ablauf der Wahl ein und behauptet es sei bei der Auszählung zu Unregelmäßigkeiten gekommen .)

All the Russian words in it are supposed to be I-OTH in the gold labels.
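For reference, this is roughly how I rebuild the sentences and label sequences from the file (the path and column names are just how I set it up locally):

```python
import pandas as pd

# the .tsv as described above: one token per row with an index and a label
df = pd.read_csv("dataset.tsv", sep="\t", names=["index", "token", "label"])

sentences, labels = [], []
current_tokens, current_labels = [], []

for _, row in df.iterrows():
    # the index restarting at 1 marks a new sentence, so close the previous one
    if row["index"] == 1 and current_tokens:
        sentences.append(current_tokens)
        labels.append(current_labels)
        current_tokens, current_labels = [], []
    current_tokens.append(row["token"])
    current_labels.append(row["label"])

if current_tokens:  # don't lose the last sentence
    sentences.append(current_tokens)
    labels.append(current_labels)
```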