Fine-tune a multilingual model to classify texts in languages outside the training set

Hi all, I have an annotated multi-class text classification dataset for a few languages (NL/DE/EN), with a few thousand instances each. I'd like to leverage a pretrained multilingual transformer so that I can also classify texts in languages for which I don't have any training data. Would it work to fine-tune, say, XLM-RoBERTa on my NL/DE/EN multi-class training data, i.e. could I then apply the resulting classifier zero-shot to languages other than NL/DE/EN? Any tips and pointers are very welcome. Thanks a lot!
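
For concreteness, here is a minimal sketch of the setup I have in mind, using the Hugging Face `Trainer` API. The label count, the dataset columns, and the French test sentence are just placeholders for illustration, not my actual data:

```python
# Sketch: fine-tune XLM-RoBERTa on NL/DE/EN labeled text, then apply the
# classifier zero-shot to a language it never saw labels for.
import torch
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "xlm-roberta-base"
num_labels = 4  # placeholder: number of classes in my task

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=num_labels
)

# Placeholder rows; in reality this would be my annotated NL/DE/EN set.
train_data = Dataset.from_dict({
    "text": ["voorbeeldzin ...", "Beispielsatz ...", "example sentence ..."],
    "label": [0, 1, 2],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-clf", num_train_epochs=3),
    train_dataset=train_data,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()

# Zero-shot application to an unseen language (French, as an example).
inputs = tokenizer("phrase d'exemple ...", return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```

My assumption is that the shared multilingual embedding space learned during pretraining is what would let the classification head transfer across languages, but I'd love confirmation or caveats from anyone who has tried this.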