How to freeze BERT weights

I’m currently trying to train a token classification BERT model following this tutorial. It says that instead of adding weight_decay to the model’s weights, one could simply freeze all weights except those of the final linear classification layer. I would like to do that because I have a very small dataset. But I’m not sure how to code it, because the AdamW optimizer expects an iterable of parameters (or parameter groups).
Can you show me what should be changed in the following code? I’ve also added my own freezing attempt below it, but I’m not sure it’s correct.

from torch.optim import AdamW  # the tutorial may import AdamW from transformers instead

FULL_FINETUNING = True
if FULL_FINETUNING:
    # apply weight decay to everything except biases and LayerNorm parameters
    param_optimizer = list(model.named_parameters())
    no_decay = ['bias', 'gamma', 'beta']  # note: current transformers name these 'LayerNorm.weight'/'LayerNorm.bias'
    optimizer_grouped_parameters = [
        {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
         'weight_decay': 0.01},  # AdamW expects the key 'weight_decay'; 'weight_decay_rate' is silently ignored
        {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
         'weight_decay': 0.0}
    ]
else:
    # only the final classification layer is handed to the optimizer
    param_optimizer = list(model.classifier.named_parameters())
    optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]

optimizer = AdamW(
    optimizer_grouped_parameters,
    lr=3e-5,
    eps=1e-8
)
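
For reference, this is the freezing variant I’ve come up with so far. It is only a rough sketch, assuming model is a BertForTokenClassification whose classification head is exposed as model.classifier, so the head parameters are named 'classifier.weight' and 'classifier.bias':

# Freeze everything except the final classification layer by turning off
# gradient computation for all other parameters.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith('classifier')

# Hand only the still-trainable parameters to the optimizer; AdamW just needs
# an iterable of parameters, so a filtered list works (AdamW imported above).
optimizer = AdamW(
    [p for p in model.parameters() if p.requires_grad],
    lr=3e-5,
    eps=1e-8
)

Would that be equivalent to the else branch above, or is the requires_grad part also needed so that no gradients are computed for the frozen layers?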

Thank you very much for your help!