How to instantiate an XLMRobertaTokenizer object from a locally trained SentencePiece tokenizer

I have trained a SentencePiece tokenizer locally and would like to instantiate an XLMRobertaTokenizer object based on it. Could someone explain how this is done?

What I have at the moment are two files: my_sp.model and my_sp.vocab.