Is there any documentation on the parameter that can be passed to the tokenisers via from_pretrained?

I’m not sure if “corpus” is the correct word, but I mean the parameter that gets passed to from_pretrained as ‘pretrained_model_name_or_path’. I can’t find any documentation on what names can be passed to it. I am only just starting with BERT models, so I’m pretty naive. I would like to compare the performance of DistilBert to RoBERTa. DistilBert and Bert accept “distilbert-base-uncased” / “bert-base-uncased”, which use uncased (lower-cased) tokens. RoBERTa, as far as I understand, uses a different tokenisation model, so it is not compatible with Bert tokenisation.
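For context, this is roughly how I am loading the Bert and DistilBert tokenisers (a minimal sketch using AutoTokenizer; the quoted checkpoint names are exactly the strings I mean by this parameter):

```python
from transformers import AutoTokenizer

# These names are resolved to a hosted vocabulary/config,
# but I can't find a list of valid names anywhere.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
distil_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Both lower-case their input before WordPiece tokenisation.
print(bert_tok.tokenize("Hello World"))    # ['hello', 'world']
print(distil_tok.tokenize("Hello World"))  # ['hello', 'world']
```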

The documentation examples use “roberta-base”, but if I try “roberta-base-uncased” it isn’t recognised (a minimal repro is below). Without documentation I am not sure what the uncased equivalent is for RoBERTa. Can anyone help me?
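Here is the sort of thing I am attempting, with the failing call commented out (a sketch, again using AutoTokenizer; “roberta-base-uncased” is just my guess at a name and may not exist at all):

```python
from transformers import AutoTokenizer

roberta_tok = AutoTokenizer.from_pretrained("roberta-base")  # works
# AutoTokenizer.from_pretrained("roberta-base-uncased")      # fails: name not recognised

# RoBERTa's byte-level BPE appears to preserve case, so the same words
# tokenise differently depending on capitalisation:
print(roberta_tok.tokenize("Hello World"))
print(roberta_tok.tokenize("hello world"))
```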