Training BERT with a new tokenizer and vocabulary

Hi there :slight_smile:

If we want to train BERT from scratch with MLM, using a new tokenizer and a custom vocabulary, is changing vocab_size in BertConfig all we have to do? That is, changing vocab_size from the default value (30522) to the size of our custom vocabulary?
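
For reference, here is a minimal sketch of the setup I have in mind, using the tokenizers and transformers libraries (corpus.txt, my-tokenizer, and the vocab size of 20000 are just placeholders):

```python
from tokenizers import BertWordPieceTokenizer
from transformers import BertConfig, BertForMaskedLM, BertTokenizerFast

# Train a new WordPiece tokenizer on our own corpus (placeholder path)
tokenizer = BertWordPieceTokenizer()
tokenizer.train(files=["corpus.txt"], vocab_size=20000)
tokenizer.save_model("my-tokenizer")

# Reload it as a fast tokenizer compatible with transformers
hf_tokenizer = BertTokenizerFast.from_pretrained("my-tokenizer")

# Point the config's vocab_size at the tokenizer's actual vocabulary size,
# so the embedding matrix and MLM head match the new vocabulary
config = BertConfig(vocab_size=len(hf_tokenizer))
model = BertForMaskedLM(config)
```

Using len(hf_tokenizer) rather than a hard-coded number seems safer, since it also counts any special tokens added on top of the trained vocabulary.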

Thanks in advance for your responses! :smiley: