Reproduce RoBERTa Using Hugging Face Transformers

I’m trying to reproduce the RoBERTa pre-training results using the run_mlm.py script provided by Hugging Face Transformers. However, I’m not sure exactly how the script should be called, in particular how to pass multiple pre-training corpora at once. Here is my current script:

python transformers/examples/pytorch/language-modeling/run_mlm.py \
    --config_name roberta-base \
    --tokenizer_name roberta-base \
    --dataset_name wikitext, bookcorpus, ccnews, openwebtext, stories \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --do_train \
    --do_eval \
    --output_dir ./crate/ckpt \
    --overwrite_output_dir
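
My current understanding is that --dataset_name only accepts a single dataset, so I suspect I need to merge the corpora myself and pass the result in through --train_file instead. Below is a rough sketch of what I have in mind (only wikitext and bookcorpus shown; the dataset names and configs are my guesses and I haven't verified this end to end):

# Sketch: merge several Hub datasets into one text-only training file
# that run_mlm.py can consume via --train_file. The dataset names and
# configs below are assumptions on my part, not verified.
from datasets import load_dataset, concatenate_datasets

wiki = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
books = load_dataset("bookcorpus", split="train")

# Keep only the "text" column so the schemas match before concatenating.
wiki = wiki.remove_columns([c for c in wiki.column_names if c != "text"])
books = books.remove_columns([c for c in books.column_names if c != "text"])

combined = concatenate_datasets([wiki, books])
combined.to_json("combined_train.json")  # then pass --train_file combined_train.json

Is that roughly the intended workflow, or is there a cleaner way to point run_mlm.py at several corpora at once?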

Has anyone already successfully reproduced RoBERTa this way? Looking forward to any possible help!