Using xlm-roberta-large-finetuned-conll03-german for entity mapping

For my master's thesis, I'd like to use this model, convert it into a Longformer, and then fine-tune it with my own data.

The fine-tuning task should map tagged PER and SKILL entities (I'll come to that specific tag in a second) that belong together. By that I mean: if an input text (in CoNLL-2003 format) says that a PER used skills like Java, Python, TensorFlow, or any other technology you can think of, the model should infer that this PER knows how to use these skills. If other PER and SKILL entities are mentioned in the text, they should not be mapped to the first PER unless the text says so.
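To make the task concrete, here is a toy sketch of the input/output behaviour I'm after. The names, sentences, and the "same sentence" grouping heuristic are all invented by me for illustration (a naive baseline that the fine-tuned model should improve on), not real data:

```python
# Toy sketch of the PER -> SKILL mapping task. The sentences, tags and the
# naive "same sentence" grouping below are invented illustrations, not my data.

# CoNLL-2003-style (token, tag) pairs, one list per sentence.
sentences = [
    [("Anna", "B-PER"), ("Meier", "I-PER"), ("nutzt", "O"),
     ("Java", "B-SKILL"), ("und", "O"), ("TensorFlow", "B-SKILL"), (".", "O")],
    [("Ben", "B-PER"), ("Schulz", "I-PER"), ("nutzt", "O"),
     ("Python", "B-SKILL"), (".", "O")],
]

def naive_mapping(sents):
    """Baseline: pair every PER with every SKILL in the same sentence."""
    result = {}
    for sent in sents:
        persons, skills = [], []
        current = []
        for token, tag in sent:
            if tag == "B-PER":
                current = [token]           # start a new person span
                persons.append(current)
            elif tag == "I-PER" and current:
                current.append(token)       # continue the current person span
            elif tag == "B-SKILL":
                skills.append(token)
            else:
                current = []                # any other tag closes the span
        for person in persons:
            result[" ".join(person)] = skills
    return result

print(naive_mapping(sentences))
# {'Anna Meier': ['Java', 'TensorFlow'], 'Ben Schulz': ['Python']}
```

This baseline obviously fails exactly in the case I care about, where the text explicitly relates a skill to a different person, which is why I want a fine-tuned model instead.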

So here are my questions:
I saw that this model is built for token classification. But is it still a general language model? Or should I use another German model for my specific task? If so, do you know of any model that can read in tagged text in CoNLL-2003 format?
And as I mentioned, there is a tag called SKILL in the input text. This model doesn't know that tag, so is it even possible to read in tagged text? Or does that not matter?
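For context, this is roughly how I imagine extending the label set: a minimal sketch that parses CoNLL-2003-style lines myself and builds a `label2id` mapping with B-SKILL/I-SKILL added next to the standard CoNLL-2003 labels. The parser and the extended label names are my own assumptions, not anything from the model card:

```python
# Minimal sketch: parse CoNLL-2003-style "token TAG" lines and map tags
# (including my custom SKILL tag) to ids for fine-tuning. The SKILL labels
# are my own addition to the standard CoNLL-2003 set.

LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC",
          "B-MISC", "I-MISC", "B-SKILL", "I-SKILL"]
label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}

def parse_conll(text):
    """Yield (tokens, tag_ids) per sentence; the tag is the last column,
    sentences are separated by blank lines."""
    tokens, tag_ids = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if tokens:
                yield tokens, tag_ids
                tokens, tag_ids = [], []
            continue
        columns = line.split()
        tokens.append(columns[0])
        tag_ids.append(label2id[columns[-1]])
    if tokens:
        yield tokens, tag_ids

sample = """Anna B-PER
kennt O
Java B-SKILL

Ben B-PER
"""
for tokens, ids in parse_conll(sample):
    print(tokens, ids)
# ['Anna', 'kennt', 'Java'] [1, 0, 9]
# ['Ben'] [1]
```

For the fine-tuning itself, I imagine one could pass `num_labels=len(LABELS)` together with `id2label`/`label2id` to `AutoModelForTokenClassification.from_pretrained(..., ignore_mismatched_sizes=True)` so the classification head is re-initialized with the extra labels, but I'm not sure whether that is the recommended way.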