Understanding model params when fine-tuning Wav2Vec2-BERT for ASR

In the blog post Fine-Tune W2V2-Bert for low-resource ASR with :hugs: Transformers, the following params are configured for loading the pretrained model:

from transformers import Wav2Vec2BertForCTC
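# note: `processor` used below is the Wav2Vec2BertProcessor built earlier in the blog post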

model = Wav2Vec2BertForCTC.from_pretrained(
    "facebook/w2v-bert-2.0",
    attention_dropout=0.0,
    hidden_dropout=0.0,
    feat_proj_dropout=0.0,
    mask_time_prob=0.0,
    layerdrop=0.0,
    ctc_loss_reduction="mean",
    add_adapter=True,
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

Aside from a brief explanation of the dropout settings, the post doesn't say much about what the other parameters do, or which of them are worth experimenting with during fine-tuning.
For example, why set add_adapter? Or mask_time_prob? I don't usually see these set in other fine-tuning blog posts, so what's the intuition for including them here?
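For context, here is a minimal sketch of what I tried in order to see which of these values actually differ from the checkpoint's defaults (assuming the public facebook/w2v-bert-2.0 config; output may vary by transformers version):

from transformers import AutoConfig

# Load the checkpoint's default config and print the fields the blog overrides,
# to see which values are actually being changed from their defaults.
config = AutoConfig.from_pretrained("facebook/w2v-bert-2.0")
for name in [
    "attention_dropout",
    "hidden_dropout",
    "feat_proj_dropout",
    "mask_time_prob",
    "layerdrop",
    "ctc_loss_reduction",
    "add_adapter",
    "vocab_size",
]:
    print(f"{name} = {getattr(config, name)}")

That tells me which values are overridden, but not why they were chosen.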

Thank you